2026-01-30 02:35:51.319087 | Job console starting
2026-01-30 02:35:51.336843 | Updating git repos
2026-01-30 02:35:51.415416 | Cloning repos into workspace
2026-01-30 02:35:51.613470 | Restoring repo states
2026-01-30 02:35:51.638589 | Merging changes
2026-01-30 02:35:51.638610 | Checking out repos
2026-01-30 02:35:51.854912 | Preparing playbooks
2026-01-30 02:35:52.571902 | Running Ansible setup
2026-01-30 02:35:56.998800 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-30 02:35:57.739533 |
2026-01-30 02:35:57.739690 | PLAY [Base pre]
2026-01-30 02:35:57.756341 |
2026-01-30 02:35:57.756477 | TASK [Setup log path fact]
2026-01-30 02:35:57.779073 | orchestrator | ok
2026-01-30 02:35:57.796400 |
2026-01-30 02:35:57.796783 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-30 02:35:57.838800 | orchestrator | ok
2026-01-30 02:35:57.853932 |
2026-01-30 02:35:57.854054 | TASK [emit-job-header : Print job information]
2026-01-30 02:35:57.904527 | # Job Information
2026-01-30 02:35:57.904708 | Ansible Version: 2.16.14
2026-01-30 02:35:57.904744 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-01-30 02:35:57.904778 | Pipeline: periodic-midnight
2026-01-30 02:35:57.904800 | Executor: 521e9411259a
2026-01-30 02:35:57.904821 | Triggered by: https://github.com/osism/testbed
2026-01-30 02:35:57.904843 | Event ID: acf0cf69c55840b3b7bc68fd3079c29a
2026-01-30 02:35:57.911589 |
2026-01-30 02:35:57.911700 | LOOP [emit-job-header : Print node information]
2026-01-30 02:35:58.031057 | orchestrator | ok:
2026-01-30 02:35:58.031379 | orchestrator | # Node Information
2026-01-30 02:35:58.031438 | orchestrator | Inventory Hostname: orchestrator
2026-01-30 02:35:58.031478 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-30 02:35:58.031510 | orchestrator | Username: zuul-testbed05
2026-01-30 02:35:58.031540 | orchestrator | Distro: Debian 12.13
2026-01-30 02:35:58.031574 | orchestrator | Provider: static-testbed
2026-01-30 02:35:58.031605 | orchestrator | Region:
2026-01-30 02:35:58.031634 | orchestrator | Label: testbed-orchestrator
2026-01-30 02:35:58.031662 | orchestrator | Product Name: OpenStack Nova
2026-01-30 02:35:58.031690 | orchestrator | Interface IP: 81.163.193.140
2026-01-30 02:35:58.051977 |
2026-01-30 02:35:58.052103 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-30 02:35:58.581395 | orchestrator -> localhost | changed
2026-01-30 02:35:58.594788 |
2026-01-30 02:35:58.594961 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-30 02:35:59.720001 | orchestrator -> localhost | changed
2026-01-30 02:35:59.736589 |
2026-01-30 02:35:59.736707 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-30 02:36:00.047131 | orchestrator -> localhost | ok
2026-01-30 02:36:00.056957 |
2026-01-30 02:36:00.057099 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-30 02:36:00.091055 | orchestrator | ok
2026-01-30 02:36:00.109062 | orchestrator | included: /var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-30 02:36:00.117599 |
2026-01-30 02:36:00.117861 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-30 02:36:01.789960 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-30 02:36:01.790268 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/work/1d7e04a1686140da853285cbef7032ad_id_rsa
2026-01-30 02:36:01.790316 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/work/1d7e04a1686140da853285cbef7032ad_id_rsa.pub
2026-01-30 02:36:01.790343 | orchestrator -> localhost | The key fingerprint is:
2026-01-30 02:36:01.790368 | orchestrator -> localhost | SHA256:inTUvO0j+feY2uA7n75CvwLrEIgWOa8Cfdk534hWDcI zuul-build-sshkey
2026-01-30 02:36:01.790390 | orchestrator -> localhost | The key's randomart image is:
2026-01-30 02:36:01.790421 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-30 02:36:01.790443 | orchestrator -> localhost | | |
2026-01-30 02:36:01.790465 | orchestrator -> localhost | | . . o |
2026-01-30 02:36:01.790485 | orchestrator -> localhost | | + E + |
2026-01-30 02:36:01.790505 | orchestrator -> localhost | | . = = o = |
2026-01-30 02:36:01.790524 | orchestrator -> localhost | |. + * * S o |
2026-01-30 02:36:01.790553 | orchestrator -> localhost | |.. + o B.=. |
2026-01-30 02:36:01.790575 | orchestrator -> localhost | |. . . = =+=. |
2026-01-30 02:36:01.790594 | orchestrator -> localhost | | . . ..+++o+ |
2026-01-30 02:36:01.790614 | orchestrator -> localhost | | .. =XX+. |
2026-01-30 02:36:01.790634 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-30 02:36:01.790712 | orchestrator -> localhost | ok: Runtime: 0:00:01.034910
2026-01-30 02:36:01.817941 |
2026-01-30 02:36:01.818076 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-30 02:36:01.850579 | orchestrator | ok
2026-01-30 02:36:01.866069 | orchestrator | included: /var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-30 02:36:01.886885 |
2026-01-30 02:36:01.888087 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-30 02:36:01.926646 | orchestrator | skipping: Conditional result was False
2026-01-30 02:36:01.958187 |
2026-01-30 02:36:01.958330 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-30 02:36:02.644248 | orchestrator | changed
2026-01-30 02:36:02.652619 |
2026-01-30 02:36:02.652757 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-30 02:36:02.941600 | orchestrator | ok
2026-01-30 02:36:02.949245 |
2026-01-30 02:36:02.949367 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-30 02:36:03.362903 | orchestrator | ok
2026-01-30 02:36:03.376766 |
2026-01-30 02:36:03.376891 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-30 02:36:03.835898 | orchestrator | ok
2026-01-30 02:36:03.842494 |
2026-01-30 02:36:03.842616 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-30 02:36:03.866392 | orchestrator | skipping: Conditional result was False
2026-01-30 02:36:03.873503 |
2026-01-30 02:36:03.873618 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-30 02:36:04.323932 | orchestrator -> localhost | changed
2026-01-30 02:36:04.343789 |
2026-01-30 02:36:04.344076 | TASK [add-build-sshkey : Add back temp key]
2026-01-30 02:36:04.692953 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/work/1d7e04a1686140da853285cbef7032ad_id_rsa (zuul-build-sshkey)
2026-01-30 02:36:04.693229 | orchestrator -> localhost | ok: Runtime: 0:00:00.010867
2026-01-30 02:36:04.700676 |
2026-01-30 02:36:04.700803 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-30 02:36:05.137826 | orchestrator | ok
2026-01-30 02:36:05.143951 |
2026-01-30 02:36:05.144071 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-30 02:36:05.178300 | orchestrator | skipping: Conditional result was False
2026-01-30 02:36:05.249286 |
2026-01-30 02:36:05.249417 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-30 02:36:05.654001 | orchestrator | ok
2026-01-30 02:36:05.665951 |
2026-01-30 02:36:05.666087 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-30 02:36:05.705695 | orchestrator | ok
2026-01-30 02:36:05.713534 |
2026-01-30 02:36:05.713669 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-30 02:36:06.135432 | orchestrator -> localhost | ok
2026-01-30 02:36:06.143061 |
2026-01-30 02:36:06.143218 | TASK [validate-host : Collect information about the host]
2026-01-30 02:36:07.417878 | orchestrator | ok
2026-01-30 02:36:07.431701 |
2026-01-30 02:36:07.431825 | TASK [validate-host : Sanitize hostname]
2026-01-30 02:36:07.485936 | orchestrator | ok
2026-01-30 02:36:07.494762 |
2026-01-30 02:36:07.494904 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-30 02:36:08.321965 | orchestrator -> localhost | changed
2026-01-30 02:36:08.328730 |
2026-01-30 02:36:08.328846 | TASK [validate-host : Collect information about zuul worker]
2026-01-30 02:36:08.782716 | orchestrator | ok
2026-01-30 02:36:08.788485 |
2026-01-30 02:36:08.788598 | TASK [validate-host : Write out all zuul information for each host]
2026-01-30 02:36:09.334448 | orchestrator -> localhost | changed
2026-01-30 02:36:09.345267 |
2026-01-30 02:36:09.345384 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-30 02:36:09.628039 | orchestrator | ok
2026-01-30 02:36:09.645915 |
2026-01-30 02:36:09.646051 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-30 02:36:27.169188 | orchestrator | changed:
2026-01-30 02:36:27.169471 | orchestrator | .d..t...... src/
2026-01-30 02:36:27.169522 | orchestrator | .d..t...... src/github.com/
2026-01-30 02:36:27.169558 | orchestrator | .d..t...... src/github.com/osism/
2026-01-30 02:36:27.169590 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-30 02:36:27.169620 | orchestrator | RedHat.yml
2026-01-30 02:36:27.186924 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-30 02:36:27.186942 | orchestrator | RedHat.yml
2026-01-30 02:36:27.186996 | orchestrator | = 1.53.0"...
2026-01-30 02:36:37.562357 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-01-30 02:36:37.730383 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-30 02:36:38.207569 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-30 02:36:38.564061 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-30 02:36:39.527435 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-30 02:36:39.601075 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-01-30 02:36:41.104892 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-01-30 02:36:41.104989 | orchestrator |
2026-01-30 02:36:41.105008 | orchestrator | Providers are signed by their developers.
2026-01-30 02:36:41.105022 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-30 02:36:41.105077 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-30 02:36:41.105110 | orchestrator |
2026-01-30 02:36:41.105123 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-30 02:36:41.105135 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-30 02:36:41.105166 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-30 02:36:41.105179 | orchestrator | you run "tofu init" in the future.
2026-01-30 02:36:41.105281 | orchestrator |
2026-01-30 02:36:41.105322 | orchestrator | OpenTofu has been successfully initialized!
2026-01-30 02:36:41.105345 | orchestrator |
2026-01-30 02:36:41.105357 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-30 02:36:41.105368 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-30 02:36:41.105380 | orchestrator | should now work.
2026-01-30 02:36:41.105391 | orchestrator |
2026-01-30 02:36:41.105402 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-30 02:36:41.105414 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-30 02:36:41.105426 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-30 02:36:41.325522 | orchestrator | Created and switched to workspace "ci"!
2026-01-30 02:36:41.325668 | orchestrator |
2026-01-30 02:36:41.325695 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-30 02:36:41.325713 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-30 02:36:41.325729 | orchestrator | for this configuration.
2026-01-30 02:36:41.460742 | orchestrator | ci.auto.tfvars
2026-01-30 02:36:41.464356 | orchestrator | default_custom.tf
2026-01-30 02:36:43.629929 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-30 02:36:44.204905 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-30 02:36:44.450260 | orchestrator |
2026-01-30 02:36:44.450375 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-30 02:36:44.450394 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-30 02:36:44.450406 | orchestrator | + create
2026-01-30 02:36:44.450417 | orchestrator | <= read (data resources)
2026-01-30 02:36:44.450429 | orchestrator |
2026-01-30 02:36:44.450439 | orchestrator | OpenTofu will perform the following actions:
2026-01-30 02:36:44.450462 | orchestrator |
2026-01-30 02:36:44.450473 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-30 02:36:44.450483 | orchestrator | # (config refers to values not yet known)
2026-01-30 02:36:44.450493 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-30 02:36:44.450506 | orchestrator | + checksum = (known after apply)
2026-01-30 02:36:44.450524 | orchestrator | + created_at = (known after apply)
2026-01-30 02:36:44.450540 | orchestrator | + file = (known after apply)
2026-01-30 02:36:44.450556 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.450615 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.450634 | orchestrator | + min_disk_gb = (known after apply)
2026-01-30 02:36:44.450646 | orchestrator | + min_ram_mb = (known after apply)
2026-01-30 02:36:44.450656 | orchestrator | + most_recent = true
2026-01-30 02:36:44.450666 | orchestrator | + name = (known after apply)
2026-01-30 02:36:44.450676 | orchestrator | + protected = (known after apply)
2026-01-30 02:36:44.450685 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.450699 | orchestrator | + schema = (known after apply)
2026-01-30 02:36:44.450709 | orchestrator | + size_bytes = (known after apply)
2026-01-30 02:36:44.450719 | orchestrator | + tags = (known after apply)
2026-01-30 02:36:44.450728 | orchestrator | + updated_at = (known after apply)
2026-01-30 02:36:44.450738 | orchestrator | }
2026-01-30 02:36:44.450748 | orchestrator |
2026-01-30 02:36:44.450758 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-30 02:36:44.450768 | orchestrator | # (config refers to values not yet known)
2026-01-30 02:36:44.450778 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-30 02:36:44.450788 | orchestrator | + checksum = (known after apply)
2026-01-30 02:36:44.450797 | orchestrator | + created_at = (known after apply)
2026-01-30 02:36:44.450807 | orchestrator | + file = (known after apply)
2026-01-30 02:36:44.450816 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.450826 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.450836 | orchestrator | + min_disk_gb = (known after apply)
2026-01-30 02:36:44.450858 | orchestrator | + min_ram_mb = (known after apply)
2026-01-30 02:36:44.450868 | orchestrator | + most_recent = true
2026-01-30 02:36:44.450885 | orchestrator | + name = (known after apply)
2026-01-30 02:36:44.450902 | orchestrator | + protected = (known after apply)
2026-01-30 02:36:44.450918 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.450935 | orchestrator | + schema = (known after apply)
2026-01-30 02:36:44.450951 | orchestrator | + size_bytes = (known after apply)
2026-01-30 02:36:44.450969 | orchestrator | + tags = (known after apply)
2026-01-30 02:36:44.450985 | orchestrator | + updated_at = (known after apply)
2026-01-30 02:36:44.451000 | orchestrator | }
2026-01-30 02:36:44.451024 | orchestrator |
2026-01-30 02:36:44.451064 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-30 02:36:44.451074 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-30 02:36:44.451084 | orchestrator | + content = (known after apply)
2026-01-30 02:36:44.451095 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-30 02:36:44.451104 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-30 02:36:44.451114 | orchestrator | + content_md5 = (known after apply)
2026-01-30 02:36:44.451124 | orchestrator | + content_sha1 = (known after apply)
2026-01-30 02:36:44.451133 | orchestrator | + content_sha256 = (known after apply)
2026-01-30 02:36:44.451171 | orchestrator | + content_sha512 = (known after apply)
2026-01-30 02:36:44.451181 | orchestrator | + directory_permission = "0777"
2026-01-30 02:36:44.451191 | orchestrator | + file_permission = "0644"
2026-01-30 02:36:44.451200 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-30 02:36:44.451210 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.451220 | orchestrator | }
2026-01-30 02:36:44.451229 | orchestrator |
2026-01-30 02:36:44.451244 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-30 02:36:44.451261 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-30 02:36:44.451276 | orchestrator | + content = (known after apply)
2026-01-30 02:36:44.451293 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-30 02:36:44.451310 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-30 02:36:44.451326 | orchestrator | + content_md5 = (known after apply)
2026-01-30 02:36:44.451343 | orchestrator | + content_sha1 = (known after apply)
2026-01-30 02:36:44.451359 | orchestrator | + content_sha256 = (known after apply)
2026-01-30 02:36:44.451373 | orchestrator | + content_sha512 = (known after apply)
2026-01-30 02:36:44.451383 | orchestrator | + directory_permission = "0777"
2026-01-30 02:36:44.451393 | orchestrator | + file_permission = "0644"
2026-01-30 02:36:44.451413 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-30 02:36:44.451423 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.451432 | orchestrator | }
2026-01-30 02:36:44.451442 | orchestrator |
2026-01-30 02:36:44.451463 | orchestrator | # local_file.inventory will be created
2026-01-30 02:36:44.451473 | orchestrator | + resource "local_file" "inventory" {
2026-01-30 02:36:44.451483 | orchestrator | + content = (known after apply)
2026-01-30 02:36:44.451493 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-30 02:36:44.451502 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-30 02:36:44.451512 | orchestrator | + content_md5 = (known after apply)
2026-01-30 02:36:44.451521 | orchestrator | + content_sha1 = (known after apply)
2026-01-30 02:36:44.451532 | orchestrator | + content_sha256 = (known after apply)
2026-01-30 02:36:44.451541 | orchestrator | + content_sha512 = (known after apply)
2026-01-30 02:36:44.451551 | orchestrator | + directory_permission = "0777"
2026-01-30 02:36:44.451560 | orchestrator | + file_permission = "0644"
2026-01-30 02:36:44.451570 | orchestrator | + filename = "inventory.ci"
2026-01-30 02:36:44.451580 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.451589 | orchestrator | }
2026-01-30 02:36:44.451600 | orchestrator |
2026-01-30 02:36:44.451616 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-30 02:36:44.451633 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-30 02:36:44.451649 | orchestrator | + content = (sensitive value)
2026-01-30 02:36:44.451665 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-30 02:36:44.451681 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-30 02:36:44.451698 | orchestrator | + content_md5 = (known after apply)
2026-01-30 02:36:44.451714 | orchestrator | + content_sha1 = (known after apply)
2026-01-30 02:36:44.451731 | orchestrator | + content_sha256 = (known after apply)
2026-01-30 02:36:44.451742 | orchestrator | + content_sha512 = (known after apply)
2026-01-30 02:36:44.451751 | orchestrator | + directory_permission = "0700"
2026-01-30 02:36:44.451761 | orchestrator | + file_permission = "0600"
2026-01-30 02:36:44.451771 | orchestrator | + filename = ".id_rsa.ci"
2026-01-30 02:36:44.451781 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.451790 | orchestrator | }
2026-01-30 02:36:44.451800 | orchestrator |
2026-01-30 02:36:44.451810 | orchestrator | # null_resource.node_semaphore will be created
2026-01-30 02:36:44.451819 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-30 02:36:44.451829 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.451839 | orchestrator | }
2026-01-30 02:36:44.451848 | orchestrator |
2026-01-30 02:36:44.451858 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-30 02:36:44.451868 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-30 02:36:44.451877 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.451887 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.451897 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.451906 | orchestrator | + image_id = (known after apply)
2026-01-30 02:36:44.451916 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.451925 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-30 02:36:44.451935 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.451944 | orchestrator | + size = 80
2026-01-30 02:36:44.451954 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.451964 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.451975 | orchestrator | }
2026-01-30 02:36:44.451992 | orchestrator |
2026-01-30 02:36:44.452007 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-30 02:36:44.452023 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-30 02:36:44.452064 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.452081 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.452098 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.452117 | orchestrator | + image_id = (known after apply)
2026-01-30 02:36:44.452128 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.452138 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-30 02:36:44.452147 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.452157 | orchestrator | + size = 80
2026-01-30 02:36:44.452167 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.452177 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.452187 | orchestrator | }
2026-01-30 02:36:44.452196 | orchestrator |
2026-01-30 02:36:44.452206 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-30 02:36:44.452216 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-30 02:36:44.452226 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.452246 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.452256 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.452265 | orchestrator | + image_id = (known after apply)
2026-01-30 02:36:44.452275 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.452285 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-30 02:36:44.452294 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.452304 | orchestrator | + size = 80
2026-01-30 02:36:44.452314 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.452324 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.452333 | orchestrator | }
2026-01-30 02:36:44.452343 | orchestrator |
2026-01-30 02:36:44.452354 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-30 02:36:44.452371 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-30 02:36:44.452387 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.452402 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.452419 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.452436 | orchestrator | + image_id = (known after apply)
2026-01-30 02:36:44.452454 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.452465 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-30 02:36:44.452474 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.452484 | orchestrator | + size = 80
2026-01-30 02:36:44.452494 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.452503 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.452513 | orchestrator | }
2026-01-30 02:36:44.452523 | orchestrator |
2026-01-30 02:36:44.452533 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-30 02:36:44.452542 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-30 02:36:44.452552 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.452562 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.452571 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.452581 | orchestrator | + image_id = (known after apply)
2026-01-30 02:36:44.452590 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.452606 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-30 02:36:44.452616 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.452626 | orchestrator | + size = 80
2026-01-30 02:36:44.452636 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.452646 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.452655 | orchestrator | }
2026-01-30 02:36:44.452665 | orchestrator |
2026-01-30 02:36:44.452675 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-30 02:36:44.452684 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-30 02:36:44.452694 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.452704 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.452716 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.452742 | orchestrator | + image_id = (known after apply)
2026-01-30 02:36:44.452758 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.452776 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-30 02:36:44.452792 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.452809 | orchestrator | + size = 80
2026-01-30 02:36:44.452825 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.452835 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.452845 | orchestrator | }
2026-01-30 02:36:44.452855 | orchestrator |
2026-01-30 02:36:44.452865 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-30 02:36:44.452875 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-30 02:36:44.452884 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.452894 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.452904 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.452913 | orchestrator | + image_id = (known after apply)
2026-01-30 02:36:44.452923 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.452933 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-30 02:36:44.452943 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.452953 | orchestrator | + size = 80
2026-01-30 02:36:44.452962 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.452972 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.452982 | orchestrator | }
2026-01-30 02:36:44.452992 | orchestrator |
2026-01-30 02:36:44.453001 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-30 02:36:44.453012 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.453022 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.453085 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.453103 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.453120 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.453138 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-30 02:36:44.453155 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.453174 | orchestrator | + size = 20
2026-01-30 02:36:44.453185 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.453195 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.453205 | orchestrator | }
2026-01-30 02:36:44.453214 | orchestrator |
2026-01-30 02:36:44.453224 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-30 02:36:44.453234 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.453243 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.453253 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.453263 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.453273 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.453282 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-30 02:36:44.453292 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.453302 | orchestrator | + size = 20
2026-01-30 02:36:44.453311 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.453321 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.453331 | orchestrator | }
2026-01-30 02:36:44.453341 | orchestrator |
2026-01-30 02:36:44.453351 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-30 02:36:44.453360 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.453377 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.453387 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.453397 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.453407 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.453416 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-30 02:36:44.453426 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.453452 | orchestrator | + size = 20
2026-01-30 02:36:44.453468 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.453484 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.453501 | orchestrator | }
2026-01-30 02:36:44.453519 | orchestrator |
2026-01-30 02:36:44.453535 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-30 02:36:44.453545 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.453555 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.453565 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.453574 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.453584 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.453593 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-30 02:36:44.453603 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.453612 | orchestrator | + size = 20
2026-01-30 02:36:44.453622 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.453632 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.453641 | orchestrator | }
2026-01-30 02:36:44.453651 | orchestrator |
2026-01-30 02:36:44.453660 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-30 02:36:44.453670 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.453679 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.453689 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.453698 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.453708 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.453718 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-30 02:36:44.453727 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.453743 | orchestrator | + size = 20
2026-01-30 02:36:44.453753 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.453763 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.453772 | orchestrator | }
2026-01-30 02:36:44.453782 | orchestrator |
2026-01-30 02:36:44.453795 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-30 02:36:44.453812 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.453828 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.453844 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.453861 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.453878 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.453895 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-30 02:36:44.453909 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.453918 | orchestrator | + size = 20
2026-01-30 02:36:44.453928 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.453938 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.453947 | orchestrator | }
2026-01-30 02:36:44.453957 | orchestrator |
2026-01-30 02:36:44.453966 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-30 02:36:44.453976 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.453986 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.454003 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.454118 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.454133 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.454142 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-30 02:36:44.454152 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.454165 | orchestrator | + size = 20
2026-01-30 02:36:44.454182 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.454197 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.454214 | orchestrator | }
2026-01-30 02:36:44.454231 | orchestrator |
2026-01-30 02:36:44.454248 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-30 02:36:44.454264 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-30 02:36:44.454292 | orchestrator | + attachment = (known after apply)
2026-01-30 02:36:44.454302 | orchestrator | + availability_zone = "nova"
2026-01-30 02:36:44.454312 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.454322 | orchestrator | + metadata = (known after apply)
2026-01-30 02:36:44.454331 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-30 02:36:44.454341 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.454350 | orchestrator | + size = 20
2026-01-30 02:36:44.454360 | orchestrator | + volume_retype_policy = "never"
2026-01-30 02:36:44.454370 | orchestrator | + volume_type = "ssd"
2026-01-30 02:36:44.454380 | orchestrator | }
2026-01-30 02:36:44.454389 | orchestrator |
2026-01-30 02:36:44.454399 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-30 02:36:44.454409 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-30 02:36:44.454418 | orchestrator | + attachment = (known after apply) 2026-01-30 02:36:44.454426 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.454433 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.454441 | orchestrator | + metadata = (known after apply) 2026-01-30 02:36:44.454449 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-30 02:36:44.454457 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.454465 | orchestrator | + size = 20 2026-01-30 02:36:44.454473 | orchestrator | + volume_retype_policy = "never" 2026-01-30 02:36:44.454481 | orchestrator | + volume_type = "ssd" 2026-01-30 02:36:44.454489 | orchestrator | } 2026-01-30 02:36:44.454497 | orchestrator | 2026-01-30 02:36:44.454505 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-30 02:36:44.454513 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-30 02:36:44.454521 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-30 02:36:44.454534 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-30 02:36:44.454547 | orchestrator | + all_metadata = (known after apply) 2026-01-30 02:36:44.454560 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.454572 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.454583 | orchestrator | + config_drive = true 2026-01-30 02:36:44.454602 | orchestrator | + created = (known after apply) 2026-01-30 02:36:44.454613 | orchestrator | + flavor_id = (known after apply) 2026-01-30 02:36:44.454624 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-30 02:36:44.454635 | orchestrator | + force_delete = false 2026-01-30 02:36:44.454646 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-30 02:36:44.454658 | 
orchestrator | + id = (known after apply) 2026-01-30 02:36:44.454670 | orchestrator | + image_id = (known after apply) 2026-01-30 02:36:44.454683 | orchestrator | + image_name = (known after apply) 2026-01-30 02:36:44.454696 | orchestrator | + key_pair = "testbed" 2026-01-30 02:36:44.454710 | orchestrator | + name = "testbed-manager" 2026-01-30 02:36:44.454724 | orchestrator | + power_state = "active" 2026-01-30 02:36:44.454738 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.454746 | orchestrator | + security_groups = (known after apply) 2026-01-30 02:36:44.454754 | orchestrator | + stop_before_destroy = false 2026-01-30 02:36:44.454762 | orchestrator | + updated = (known after apply) 2026-01-30 02:36:44.454770 | orchestrator | + user_data = (sensitive value) 2026-01-30 02:36:44.454777 | orchestrator | 2026-01-30 02:36:44.454785 | orchestrator | + block_device { 2026-01-30 02:36:44.454793 | orchestrator | + boot_index = 0 2026-01-30 02:36:44.454801 | orchestrator | + delete_on_termination = false 2026-01-30 02:36:44.454817 | orchestrator | + destination_type = "volume" 2026-01-30 02:36:44.454831 | orchestrator | + multiattach = false 2026-01-30 02:36:44.454844 | orchestrator | + source_type = "volume" 2026-01-30 02:36:44.454856 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.454876 | orchestrator | } 2026-01-30 02:36:44.454889 | orchestrator | 2026-01-30 02:36:44.454901 | orchestrator | + network { 2026-01-30 02:36:44.454915 | orchestrator | + access_network = false 2026-01-30 02:36:44.454929 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-30 02:36:44.454940 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-30 02:36:44.454948 | orchestrator | + mac = (known after apply) 2026-01-30 02:36:44.454956 | orchestrator | + name = (known after apply) 2026-01-30 02:36:44.454963 | orchestrator | + port = (known after apply) 2026-01-30 02:36:44.454971 | orchestrator | + uuid = (known after apply) 2026-01-30 
02:36:44.454979 | orchestrator | } 2026-01-30 02:36:44.454987 | orchestrator | } 2026-01-30 02:36:44.454994 | orchestrator | 2026-01-30 02:36:44.455008 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-30 02:36:44.455022 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-30 02:36:44.455057 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-30 02:36:44.455071 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-30 02:36:44.455084 | orchestrator | + all_metadata = (known after apply) 2026-01-30 02:36:44.455098 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.455111 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.455124 | orchestrator | + config_drive = true 2026-01-30 02:36:44.455132 | orchestrator | + created = (known after apply) 2026-01-30 02:36:44.455140 | orchestrator | + flavor_id = (known after apply) 2026-01-30 02:36:44.455148 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-30 02:36:44.455156 | orchestrator | + force_delete = false 2026-01-30 02:36:44.455164 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-30 02:36:44.455172 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.455179 | orchestrator | + image_id = (known after apply) 2026-01-30 02:36:44.455187 | orchestrator | + image_name = (known after apply) 2026-01-30 02:36:44.455195 | orchestrator | + key_pair = "testbed" 2026-01-30 02:36:44.455203 | orchestrator | + name = "testbed-node-0" 2026-01-30 02:36:44.455211 | orchestrator | + power_state = "active" 2026-01-30 02:36:44.455218 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.455226 | orchestrator | + security_groups = (known after apply) 2026-01-30 02:36:44.455234 | orchestrator | + stop_before_destroy = false 2026-01-30 02:36:44.455242 | orchestrator | + updated = (known after apply) 2026-01-30 02:36:44.455250 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-30 02:36:44.455258 | orchestrator | 2026-01-30 02:36:44.455265 | orchestrator | + block_device { 2026-01-30 02:36:44.455273 | orchestrator | + boot_index = 0 2026-01-30 02:36:44.455281 | orchestrator | + delete_on_termination = false 2026-01-30 02:36:44.455289 | orchestrator | + destination_type = "volume" 2026-01-30 02:36:44.455297 | orchestrator | + multiattach = false 2026-01-30 02:36:44.455305 | orchestrator | + source_type = "volume" 2026-01-30 02:36:44.455313 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.455320 | orchestrator | } 2026-01-30 02:36:44.455328 | orchestrator | 2026-01-30 02:36:44.455336 | orchestrator | + network { 2026-01-30 02:36:44.455344 | orchestrator | + access_network = false 2026-01-30 02:36:44.455352 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-30 02:36:44.455360 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-30 02:36:44.455368 | orchestrator | + mac = (known after apply) 2026-01-30 02:36:44.455382 | orchestrator | + name = (known after apply) 2026-01-30 02:36:44.455395 | orchestrator | + port = (known after apply) 2026-01-30 02:36:44.455408 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.455421 | orchestrator | } 2026-01-30 02:36:44.455435 | orchestrator | } 2026-01-30 02:36:44.455448 | orchestrator | 2026-01-30 02:36:44.455457 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-30 02:36:44.455465 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-30 02:36:44.455472 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-30 02:36:44.455487 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-30 02:36:44.455495 | orchestrator | + all_metadata = (known after apply) 2026-01-30 02:36:44.455503 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.455511 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.455519 
| orchestrator | + config_drive = true 2026-01-30 02:36:44.455526 | orchestrator | + created = (known after apply) 2026-01-30 02:36:44.455534 | orchestrator | + flavor_id = (known after apply) 2026-01-30 02:36:44.455542 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-30 02:36:44.455550 | orchestrator | + force_delete = false 2026-01-30 02:36:44.455558 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-30 02:36:44.455565 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.455573 | orchestrator | + image_id = (known after apply) 2026-01-30 02:36:44.455581 | orchestrator | + image_name = (known after apply) 2026-01-30 02:36:44.455589 | orchestrator | + key_pair = "testbed" 2026-01-30 02:36:44.455597 | orchestrator | + name = "testbed-node-1" 2026-01-30 02:36:44.455611 | orchestrator | + power_state = "active" 2026-01-30 02:36:44.455619 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.455627 | orchestrator | + security_groups = (known after apply) 2026-01-30 02:36:44.455635 | orchestrator | + stop_before_destroy = false 2026-01-30 02:36:44.455643 | orchestrator | + updated = (known after apply) 2026-01-30 02:36:44.455651 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-30 02:36:44.455658 | orchestrator | 2026-01-30 02:36:44.455666 | orchestrator | + block_device { 2026-01-30 02:36:44.455674 | orchestrator | + boot_index = 0 2026-01-30 02:36:44.455682 | orchestrator | + delete_on_termination = false 2026-01-30 02:36:44.455690 | orchestrator | + destination_type = "volume" 2026-01-30 02:36:44.455698 | orchestrator | + multiattach = false 2026-01-30 02:36:44.455706 | orchestrator | + source_type = "volume" 2026-01-30 02:36:44.455714 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.455728 | orchestrator | } 2026-01-30 02:36:44.455741 | orchestrator | 2026-01-30 02:36:44.455754 | orchestrator | + network { 2026-01-30 02:36:44.455767 | orchestrator | + access_network = 
false 2026-01-30 02:36:44.455780 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-30 02:36:44.455794 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-30 02:36:44.455806 | orchestrator | + mac = (known after apply) 2026-01-30 02:36:44.455813 | orchestrator | + name = (known after apply) 2026-01-30 02:36:44.455821 | orchestrator | + port = (known after apply) 2026-01-30 02:36:44.455829 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.455837 | orchestrator | } 2026-01-30 02:36:44.455845 | orchestrator | } 2026-01-30 02:36:44.455853 | orchestrator | 2026-01-30 02:36:44.455860 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-30 02:36:44.455868 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-30 02:36:44.455876 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-30 02:36:44.455884 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-30 02:36:44.455893 | orchestrator | + all_metadata = (known after apply) 2026-01-30 02:36:44.455901 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.455914 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.455922 | orchestrator | + config_drive = true 2026-01-30 02:36:44.455930 | orchestrator | + created = (known after apply) 2026-01-30 02:36:44.455938 | orchestrator | + flavor_id = (known after apply) 2026-01-30 02:36:44.455946 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-30 02:36:44.455954 | orchestrator | + force_delete = false 2026-01-30 02:36:44.455962 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-30 02:36:44.455969 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.455977 | orchestrator | + image_id = (known after apply) 2026-01-30 02:36:44.455992 | orchestrator | + image_name = (known after apply) 2026-01-30 02:36:44.456000 | orchestrator | + key_pair = "testbed" 2026-01-30 02:36:44.456008 | orchestrator | + name = 
"testbed-node-2" 2026-01-30 02:36:44.456016 | orchestrator | + power_state = "active" 2026-01-30 02:36:44.456023 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.456058 | orchestrator | + security_groups = (known after apply) 2026-01-30 02:36:44.456073 | orchestrator | + stop_before_destroy = false 2026-01-30 02:36:44.456085 | orchestrator | + updated = (known after apply) 2026-01-30 02:36:44.456098 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-30 02:36:44.456112 | orchestrator | 2026-01-30 02:36:44.456125 | orchestrator | + block_device { 2026-01-30 02:36:44.456139 | orchestrator | + boot_index = 0 2026-01-30 02:36:44.456151 | orchestrator | + delete_on_termination = false 2026-01-30 02:36:44.456159 | orchestrator | + destination_type = "volume" 2026-01-30 02:36:44.456167 | orchestrator | + multiattach = false 2026-01-30 02:36:44.456175 | orchestrator | + source_type = "volume" 2026-01-30 02:36:44.456183 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.456191 | orchestrator | } 2026-01-30 02:36:44.456199 | orchestrator | 2026-01-30 02:36:44.456207 | orchestrator | + network { 2026-01-30 02:36:44.456214 | orchestrator | + access_network = false 2026-01-30 02:36:44.456222 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-30 02:36:44.456230 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-30 02:36:44.456238 | orchestrator | + mac = (known after apply) 2026-01-30 02:36:44.456246 | orchestrator | + name = (known after apply) 2026-01-30 02:36:44.456253 | orchestrator | + port = (known after apply) 2026-01-30 02:36:44.456261 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.456269 | orchestrator | } 2026-01-30 02:36:44.456277 | orchestrator | } 2026-01-30 02:36:44.456285 | orchestrator | 2026-01-30 02:36:44.456293 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-30 02:36:44.456301 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-30 02:36:44.456309 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-30 02:36:44.456316 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-30 02:36:44.456324 | orchestrator | + all_metadata = (known after apply) 2026-01-30 02:36:44.456332 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.456340 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.456348 | orchestrator | + config_drive = true 2026-01-30 02:36:44.456355 | orchestrator | + created = (known after apply) 2026-01-30 02:36:44.456363 | orchestrator | + flavor_id = (known after apply) 2026-01-30 02:36:44.456371 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-30 02:36:44.456379 | orchestrator | + force_delete = false 2026-01-30 02:36:44.456387 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-30 02:36:44.456394 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.456402 | orchestrator | + image_id = (known after apply) 2026-01-30 02:36:44.456410 | orchestrator | + image_name = (known after apply) 2026-01-30 02:36:44.456423 | orchestrator | + key_pair = "testbed" 2026-01-30 02:36:44.456448 | orchestrator | + name = "testbed-node-3" 2026-01-30 02:36:44.456462 | orchestrator | + power_state = "active" 2026-01-30 02:36:44.456475 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.456489 | orchestrator | + security_groups = (known after apply) 2026-01-30 02:36:44.456497 | orchestrator | + stop_before_destroy = false 2026-01-30 02:36:44.456505 | orchestrator | + updated = (known after apply) 2026-01-30 02:36:44.456513 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-30 02:36:44.456521 | orchestrator | 2026-01-30 02:36:44.456529 | orchestrator | + block_device { 2026-01-30 02:36:44.456542 | orchestrator | + boot_index = 0 2026-01-30 02:36:44.456550 | orchestrator | + delete_on_termination = false 2026-01-30 
02:36:44.456558 | orchestrator | + destination_type = "volume" 2026-01-30 02:36:44.456582 | orchestrator | + multiattach = false 2026-01-30 02:36:44.456590 | orchestrator | + source_type = "volume" 2026-01-30 02:36:44.456598 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.456606 | orchestrator | } 2026-01-30 02:36:44.456614 | orchestrator | 2026-01-30 02:36:44.456622 | orchestrator | + network { 2026-01-30 02:36:44.456630 | orchestrator | + access_network = false 2026-01-30 02:36:44.456638 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-30 02:36:44.456645 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-30 02:36:44.456653 | orchestrator | + mac = (known after apply) 2026-01-30 02:36:44.456661 | orchestrator | + name = (known after apply) 2026-01-30 02:36:44.456669 | orchestrator | + port = (known after apply) 2026-01-30 02:36:44.456676 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.456684 | orchestrator | } 2026-01-30 02:36:44.456692 | orchestrator | } 2026-01-30 02:36:44.456700 | orchestrator | 2026-01-30 02:36:44.456708 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-30 02:36:44.456716 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-30 02:36:44.456724 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-30 02:36:44.456731 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-30 02:36:44.456739 | orchestrator | + all_metadata = (known after apply) 2026-01-30 02:36:44.456747 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.456755 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.456767 | orchestrator | + config_drive = true 2026-01-30 02:36:44.456781 | orchestrator | + created = (known after apply) 2026-01-30 02:36:44.456794 | orchestrator | + flavor_id = (known after apply) 2026-01-30 02:36:44.456807 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-30 02:36:44.456820 | 
orchestrator | + force_delete = false 2026-01-30 02:36:44.456834 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-30 02:36:44.456844 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.456852 | orchestrator | + image_id = (known after apply) 2026-01-30 02:36:44.456860 | orchestrator | + image_name = (known after apply) 2026-01-30 02:36:44.456868 | orchestrator | + key_pair = "testbed" 2026-01-30 02:36:44.456876 | orchestrator | + name = "testbed-node-4" 2026-01-30 02:36:44.456884 | orchestrator | + power_state = "active" 2026-01-30 02:36:44.456892 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.456899 | orchestrator | + security_groups = (known after apply) 2026-01-30 02:36:44.456907 | orchestrator | + stop_before_destroy = false 2026-01-30 02:36:44.456915 | orchestrator | + updated = (known after apply) 2026-01-30 02:36:44.456923 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-30 02:36:44.456930 | orchestrator | 2026-01-30 02:36:44.456938 | orchestrator | + block_device { 2026-01-30 02:36:44.456946 | orchestrator | + boot_index = 0 2026-01-30 02:36:44.456954 | orchestrator | + delete_on_termination = false 2026-01-30 02:36:44.456962 | orchestrator | + destination_type = "volume" 2026-01-30 02:36:44.456970 | orchestrator | + multiattach = false 2026-01-30 02:36:44.456978 | orchestrator | + source_type = "volume" 2026-01-30 02:36:44.456985 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.456993 | orchestrator | } 2026-01-30 02:36:44.457001 | orchestrator | 2026-01-30 02:36:44.457009 | orchestrator | + network { 2026-01-30 02:36:44.457017 | orchestrator | + access_network = false 2026-01-30 02:36:44.457082 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-30 02:36:44.457092 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-30 02:36:44.457101 | orchestrator | + mac = (known after apply) 2026-01-30 02:36:44.457111 | orchestrator | + name = (known 
after apply) 2026-01-30 02:36:44.457125 | orchestrator | + port = (known after apply) 2026-01-30 02:36:44.457138 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.457152 | orchestrator | } 2026-01-30 02:36:44.457166 | orchestrator | } 2026-01-30 02:36:44.457188 | orchestrator | 2026-01-30 02:36:44.457200 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-30 02:36:44.457207 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-30 02:36:44.457214 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-30 02:36:44.457220 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-30 02:36:44.457227 | orchestrator | + all_metadata = (known after apply) 2026-01-30 02:36:44.457233 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.457240 | orchestrator | + availability_zone = "nova" 2026-01-30 02:36:44.457246 | orchestrator | + config_drive = true 2026-01-30 02:36:44.457253 | orchestrator | + created = (known after apply) 2026-01-30 02:36:44.457260 | orchestrator | + flavor_id = (known after apply) 2026-01-30 02:36:44.457266 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-30 02:36:44.457273 | orchestrator | + force_delete = false 2026-01-30 02:36:44.457284 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-30 02:36:44.457291 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.457298 | orchestrator | + image_id = (known after apply) 2026-01-30 02:36:44.457304 | orchestrator | + image_name = (known after apply) 2026-01-30 02:36:44.457311 | orchestrator | + key_pair = "testbed" 2026-01-30 02:36:44.457317 | orchestrator | + name = "testbed-node-5" 2026-01-30 02:36:44.457324 | orchestrator | + power_state = "active" 2026-01-30 02:36:44.457331 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.457337 | orchestrator | + security_groups = (known after apply) 2026-01-30 02:36:44.457344 | orchestrator | + 
stop_before_destroy = false 2026-01-30 02:36:44.457350 | orchestrator | + updated = (known after apply) 2026-01-30 02:36:44.457357 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-30 02:36:44.457364 | orchestrator | 2026-01-30 02:36:44.457370 | orchestrator | + block_device { 2026-01-30 02:36:44.457377 | orchestrator | + boot_index = 0 2026-01-30 02:36:44.457384 | orchestrator | + delete_on_termination = false 2026-01-30 02:36:44.457390 | orchestrator | + destination_type = "volume" 2026-01-30 02:36:44.457397 | orchestrator | + multiattach = false 2026-01-30 02:36:44.457403 | orchestrator | + source_type = "volume" 2026-01-30 02:36:44.457410 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.457417 | orchestrator | } 2026-01-30 02:36:44.457423 | orchestrator | 2026-01-30 02:36:44.457430 | orchestrator | + network { 2026-01-30 02:36:44.457436 | orchestrator | + access_network = false 2026-01-30 02:36:44.457443 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-30 02:36:44.457450 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-30 02:36:44.457457 | orchestrator | + mac = (known after apply) 2026-01-30 02:36:44.457469 | orchestrator | + name = (known after apply) 2026-01-30 02:36:44.457489 | orchestrator | + port = (known after apply) 2026-01-30 02:36:44.457501 | orchestrator | + uuid = (known after apply) 2026-01-30 02:36:44.457512 | orchestrator | } 2026-01-30 02:36:44.457525 | orchestrator | } 2026-01-30 02:36:44.457533 | orchestrator | 2026-01-30 02:36:44.457540 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-30 02:36:44.457547 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-30 02:36:44.457554 | orchestrator | + fingerprint = (known after apply) 2026-01-30 02:36:44.457561 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.457567 | orchestrator | + name = "testbed" 2026-01-30 02:36:44.457574 | orchestrator | + private_key = 
(sensitive value) 2026-01-30 02:36:44.457580 | orchestrator | + public_key = (known after apply) 2026-01-30 02:36:44.457587 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.457594 | orchestrator | + user_id = (known after apply) 2026-01-30 02:36:44.457600 | orchestrator | } 2026-01-30 02:36:44.457607 | orchestrator | 2026-01-30 02:36:44.457614 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-30 02:36:44.457620 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.457632 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.457639 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.457646 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.457652 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.457659 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.457666 | orchestrator | } 2026-01-30 02:36:44.457672 | orchestrator | 2026-01-30 02:36:44.457679 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-30 02:36:44.457686 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.457692 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.457699 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.457705 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.457712 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.457719 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.457725 | orchestrator | } 2026-01-30 02:36:44.457732 | orchestrator | 2026-01-30 02:36:44.457739 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-30 02:36:44.457745 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{ 2026-01-30 02:36:44.457752 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.457759 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.457765 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.457772 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.457779 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.457785 | orchestrator | } 2026-01-30 02:36:44.457792 | orchestrator | 2026-01-30 02:36:44.457799 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-01-30 02:36:44.457811 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.457822 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.457832 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.457844 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.457856 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.457867 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.457874 | orchestrator | } 2026-01-30 02:36:44.457881 | orchestrator | 2026-01-30 02:36:44.457888 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-01-30 02:36:44.457895 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.457901 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.457908 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.457914 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.457925 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.457932 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.457939 | orchestrator | } 2026-01-30 02:36:44.457964 | orchestrator | 2026-01-30 02:36:44.457971 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] 
will be created 2026-01-30 02:36:44.457978 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.457984 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.457996 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.458007 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.459195 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459222 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.459234 | orchestrator | } 2026-01-30 02:36:44.459246 | orchestrator | 2026-01-30 02:36:44.459255 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2026-01-30 02:36:44.459262 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.459269 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.459276 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.459283 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.459289 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459304 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.459311 | orchestrator | } 2026-01-30 02:36:44.459317 | orchestrator | 2026-01-30 02:36:44.459324 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2026-01-30 02:36:44.459331 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.459337 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.459344 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.459351 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.459358 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459365 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.459371 | orchestrator | } 2026-01-30 
02:36:44.459378 | orchestrator | 2026-01-30 02:36:44.459385 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2026-01-30 02:36:44.459392 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-30 02:36:44.459398 | orchestrator | + device = (known after apply) 2026-01-30 02:36:44.459405 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.459411 | orchestrator | + instance_id = (known after apply) 2026-01-30 02:36:44.459418 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459425 | orchestrator | + volume_id = (known after apply) 2026-01-30 02:36:44.459431 | orchestrator | } 2026-01-30 02:36:44.459438 | orchestrator | 2026-01-30 02:36:44.459445 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2026-01-30 02:36:44.459453 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2026-01-30 02:36:44.459460 | orchestrator | + fixed_ip = (known after apply) 2026-01-30 02:36:44.459483 | orchestrator | + floating_ip = (known after apply) 2026-01-30 02:36:44.459494 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.459506 | orchestrator | + port_id = (known after apply) 2026-01-30 02:36:44.459516 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459527 | orchestrator | } 2026-01-30 02:36:44.459538 | orchestrator | 2026-01-30 02:36:44.459548 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created 2026-01-30 02:36:44.459559 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2026-01-30 02:36:44.459571 | orchestrator | + address = (known after apply) 2026-01-30 02:36:44.459582 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.459593 | orchestrator | + dns_domain = (known after apply) 2026-01-30 02:36:44.459605 | orchestrator | 
+ dns_name = (known after apply) 2026-01-30 02:36:44.459612 | orchestrator | + fixed_ip = (known after apply) 2026-01-30 02:36:44.459619 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.459626 | orchestrator | + pool = "public" 2026-01-30 02:36:44.459633 | orchestrator | + port_id = (known after apply) 2026-01-30 02:36:44.459639 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459646 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.459653 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.459659 | orchestrator | } 2026-01-30 02:36:44.459666 | orchestrator | 2026-01-30 02:36:44.459673 | orchestrator | # openstack_networking_network_v2.net_management will be created 2026-01-30 02:36:44.459680 | orchestrator | + resource "openstack_networking_network_v2" "net_management" { 2026-01-30 02:36:44.459686 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.459693 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.459700 | orchestrator | + availability_zone_hints = [ 2026-01-30 02:36:44.459707 | orchestrator | + "nova", 2026-01-30 02:36:44.459715 | orchestrator | ] 2026-01-30 02:36:44.459727 | orchestrator | + dns_domain = (known after apply) 2026-01-30 02:36:44.459738 | orchestrator | + external = (known after apply) 2026-01-30 02:36:44.459749 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.459757 | orchestrator | + mtu = (known after apply) 2026-01-30 02:36:44.459763 | orchestrator | + name = "net-testbed-management" 2026-01-30 02:36:44.459770 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.459783 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.459790 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459797 | orchestrator | + shared = (known after apply) 2026-01-30 02:36:44.459804 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.459810 | 
orchestrator | + transparent_vlan = (known after apply) 2026-01-30 02:36:44.459817 | orchestrator | 2026-01-30 02:36:44.459824 | orchestrator | + segments (known after apply) 2026-01-30 02:36:44.459830 | orchestrator | } 2026-01-30 02:36:44.459837 | orchestrator | 2026-01-30 02:36:44.459844 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-01-30 02:36:44.459851 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-01-30 02:36:44.459857 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.459864 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-30 02:36:44.459872 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-30 02:36:44.459891 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.459903 | orchestrator | + device_id = (known after apply) 2026-01-30 02:36:44.459914 | orchestrator | + device_owner = (known after apply) 2026-01-30 02:36:44.459925 | orchestrator | + dns_assignment = (known after apply) 2026-01-30 02:36:44.459936 | orchestrator | + dns_name = (known after apply) 2026-01-30 02:36:44.459946 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.459953 | orchestrator | + mac_address = (known after apply) 2026-01-30 02:36:44.459959 | orchestrator | + network_id = (known after apply) 2026-01-30 02:36:44.459966 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.459972 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.459979 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.459985 | orchestrator | + security_group_ids = (known after apply) 2026-01-30 02:36:44.459992 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.459999 | orchestrator | 2026-01-30 02:36:44.460005 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460012 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-30 
02:36:44.460019 | orchestrator | } 2026-01-30 02:36:44.460045 | orchestrator | 2026-01-30 02:36:44.460056 | orchestrator | + binding (known after apply) 2026-01-30 02:36:44.460063 | orchestrator | 2026-01-30 02:36:44.460069 | orchestrator | + fixed_ip { 2026-01-30 02:36:44.460076 | orchestrator | + ip_address = "192.168.16.5" 2026-01-30 02:36:44.460083 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.460089 | orchestrator | } 2026-01-30 02:36:44.460096 | orchestrator | } 2026-01-30 02:36:44.460103 | orchestrator | 2026-01-30 02:36:44.460109 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-01-30 02:36:44.460116 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-30 02:36:44.460123 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.460130 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-30 02:36:44.460136 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-30 02:36:44.460143 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.460150 | orchestrator | + device_id = (known after apply) 2026-01-30 02:36:44.460156 | orchestrator | + device_owner = (known after apply) 2026-01-30 02:36:44.460163 | orchestrator | + dns_assignment = (known after apply) 2026-01-30 02:36:44.460170 | orchestrator | + dns_name = (known after apply) 2026-01-30 02:36:44.460176 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.460183 | orchestrator | + mac_address = (known after apply) 2026-01-30 02:36:44.460189 | orchestrator | + network_id = (known after apply) 2026-01-30 02:36:44.460196 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.460202 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.460209 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.460246 | orchestrator | + security_group_ids = (known after apply) 2026-01-30 
02:36:44.460259 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.460270 | orchestrator | 2026-01-30 02:36:44.460282 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460292 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-30 02:36:44.460299 | orchestrator | } 2026-01-30 02:36:44.460305 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460320 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-30 02:36:44.460327 | orchestrator | } 2026-01-30 02:36:44.460333 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460340 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-30 02:36:44.460347 | orchestrator | } 2026-01-30 02:36:44.460353 | orchestrator | 2026-01-30 02:36:44.460360 | orchestrator | + binding (known after apply) 2026-01-30 02:36:44.460366 | orchestrator | 2026-01-30 02:36:44.460373 | orchestrator | + fixed_ip { 2026-01-30 02:36:44.460380 | orchestrator | + ip_address = "192.168.16.10" 2026-01-30 02:36:44.460387 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.460393 | orchestrator | } 2026-01-30 02:36:44.460400 | orchestrator | } 2026-01-30 02:36:44.460406 | orchestrator | 2026-01-30 02:36:44.460413 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-01-30 02:36:44.460420 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-30 02:36:44.460426 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.460433 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-30 02:36:44.460440 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-30 02:36:44.460446 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.460453 | orchestrator | + device_id = (known after apply) 2026-01-30 02:36:44.460460 | orchestrator | + device_owner = (known after apply) 2026-01-30 02:36:44.460466 | orchestrator | + dns_assignment = (known after 
apply) 2026-01-30 02:36:44.460473 | orchestrator | + dns_name = (known after apply) 2026-01-30 02:36:44.460479 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.460486 | orchestrator | + mac_address = (known after apply) 2026-01-30 02:36:44.460493 | orchestrator | + network_id = (known after apply) 2026-01-30 02:36:44.460499 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.460506 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.460512 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.460519 | orchestrator | + security_group_ids = (known after apply) 2026-01-30 02:36:44.460526 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.460532 | orchestrator | 2026-01-30 02:36:44.460539 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460545 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-30 02:36:44.460552 | orchestrator | } 2026-01-30 02:36:44.460562 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460574 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-30 02:36:44.460584 | orchestrator | } 2026-01-30 02:36:44.460596 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460607 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-30 02:36:44.460618 | orchestrator | } 2026-01-30 02:36:44.460630 | orchestrator | 2026-01-30 02:36:44.460637 | orchestrator | + binding (known after apply) 2026-01-30 02:36:44.460644 | orchestrator | 2026-01-30 02:36:44.460651 | orchestrator | + fixed_ip { 2026-01-30 02:36:44.460657 | orchestrator | + ip_address = "192.168.16.11" 2026-01-30 02:36:44.460677 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.460684 | orchestrator | } 2026-01-30 02:36:44.460690 | orchestrator | } 2026-01-30 02:36:44.460697 | orchestrator | 2026-01-30 02:36:44.460704 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-01-30 
02:36:44.460710 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-30 02:36:44.460717 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.460724 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-30 02:36:44.460730 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-30 02:36:44.460737 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.460749 | orchestrator | + device_id = (known after apply) 2026-01-30 02:36:44.460756 | orchestrator | + device_owner = (known after apply) 2026-01-30 02:36:44.460762 | orchestrator | + dns_assignment = (known after apply) 2026-01-30 02:36:44.460769 | orchestrator | + dns_name = (known after apply) 2026-01-30 02:36:44.460781 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.460787 | orchestrator | + mac_address = (known after apply) 2026-01-30 02:36:44.460794 | orchestrator | + network_id = (known after apply) 2026-01-30 02:36:44.460801 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.460807 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.460814 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.460820 | orchestrator | + security_group_ids = (known after apply) 2026-01-30 02:36:44.460827 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.460834 | orchestrator | 2026-01-30 02:36:44.460840 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460847 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-30 02:36:44.460854 | orchestrator | } 2026-01-30 02:36:44.460860 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460867 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-30 02:36:44.460874 | orchestrator | } 2026-01-30 02:36:44.460880 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.460887 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-30 02:36:44.460893 
| orchestrator | } 2026-01-30 02:36:44.460904 | orchestrator | 2026-01-30 02:36:44.460916 | orchestrator | + binding (known after apply) 2026-01-30 02:36:44.460927 | orchestrator | 2026-01-30 02:36:44.460940 | orchestrator | + fixed_ip { 2026-01-30 02:36:44.460951 | orchestrator | + ip_address = "192.168.16.12" 2026-01-30 02:36:44.460963 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.460973 | orchestrator | } 2026-01-30 02:36:44.460980 | orchestrator | } 2026-01-30 02:36:44.460986 | orchestrator | 2026-01-30 02:36:44.460993 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-01-30 02:36:44.461000 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-30 02:36:44.461006 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.461013 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-30 02:36:44.461020 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-30 02:36:44.461058 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.461066 | orchestrator | + device_id = (known after apply) 2026-01-30 02:36:44.461073 | orchestrator | + device_owner = (known after apply) 2026-01-30 02:36:44.461080 | orchestrator | + dns_assignment = (known after apply) 2026-01-30 02:36:44.461086 | orchestrator | + dns_name = (known after apply) 2026-01-30 02:36:44.461093 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.461100 | orchestrator | + mac_address = (known after apply) 2026-01-30 02:36:44.461106 | orchestrator | + network_id = (known after apply) 2026-01-30 02:36:44.461113 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.461120 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.461126 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.461133 | orchestrator | + security_group_ids = (known after apply) 2026-01-30 02:36:44.461150 | 
orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.461157 | orchestrator | 2026-01-30 02:36:44.461170 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461177 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-30 02:36:44.461184 | orchestrator | } 2026-01-30 02:36:44.461190 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461197 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-30 02:36:44.461203 | orchestrator | } 2026-01-30 02:36:44.461210 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461217 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-30 02:36:44.461223 | orchestrator | } 2026-01-30 02:36:44.461230 | orchestrator | 2026-01-30 02:36:44.461245 | orchestrator | + binding (known after apply) 2026-01-30 02:36:44.461257 | orchestrator | 2026-01-30 02:36:44.461268 | orchestrator | + fixed_ip { 2026-01-30 02:36:44.461279 | orchestrator | + ip_address = "192.168.16.13" 2026-01-30 02:36:44.461291 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.461302 | orchestrator | } 2026-01-30 02:36:44.461313 | orchestrator | } 2026-01-30 02:36:44.461325 | orchestrator | 2026-01-30 02:36:44.461334 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-01-30 02:36:44.461341 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-01-30 02:36:44.461347 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.461354 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-30 02:36:44.461361 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-30 02:36:44.461367 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.461374 | orchestrator | + device_id = (known after apply) 2026-01-30 02:36:44.461380 | orchestrator | + device_owner = (known after apply) 2026-01-30 02:36:44.461387 | orchestrator | + dns_assignment = (known after apply) 2026-01-30 
02:36:44.461394 | orchestrator | + dns_name = (known after apply) 2026-01-30 02:36:44.461400 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.461407 | orchestrator | + mac_address = (known after apply) 2026-01-30 02:36:44.461413 | orchestrator | + network_id = (known after apply) 2026-01-30 02:36:44.461420 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.461427 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.461433 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.461440 | orchestrator | + security_group_ids = (known after apply) 2026-01-30 02:36:44.461447 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.461454 | orchestrator | 2026-01-30 02:36:44.461461 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461468 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-30 02:36:44.461475 | orchestrator | } 2026-01-30 02:36:44.461481 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461488 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-30 02:36:44.461494 | orchestrator | } 2026-01-30 02:36:44.461501 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461508 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-30 02:36:44.461514 | orchestrator | } 2026-01-30 02:36:44.461521 | orchestrator | 2026-01-30 02:36:44.461527 | orchestrator | + binding (known after apply) 2026-01-30 02:36:44.461534 | orchestrator | 2026-01-30 02:36:44.461541 | orchestrator | + fixed_ip { 2026-01-30 02:36:44.461548 | orchestrator | + ip_address = "192.168.16.14" 2026-01-30 02:36:44.461555 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.461561 | orchestrator | } 2026-01-30 02:36:44.461568 | orchestrator | } 2026-01-30 02:36:44.461575 | orchestrator | 2026-01-30 02:36:44.461581 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-01-30 02:36:44.461588 | orchestrator | 
+ resource "openstack_networking_port_v2" "node_port_management" { 2026-01-30 02:36:44.461597 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.461609 | orchestrator | + all_fixed_ips = (known after apply) 2026-01-30 02:36:44.461620 | orchestrator | + all_security_group_ids = (known after apply) 2026-01-30 02:36:44.461631 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.461642 | orchestrator | + device_id = (known after apply) 2026-01-30 02:36:44.461654 | orchestrator | + device_owner = (known after apply) 2026-01-30 02:36:44.461666 | orchestrator | + dns_assignment = (known after apply) 2026-01-30 02:36:44.461673 | orchestrator | + dns_name = (known after apply) 2026-01-30 02:36:44.461679 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.461686 | orchestrator | + mac_address = (known after apply) 2026-01-30 02:36:44.461692 | orchestrator | + network_id = (known after apply) 2026-01-30 02:36:44.461699 | orchestrator | + port_security_enabled = (known after apply) 2026-01-30 02:36:44.461706 | orchestrator | + qos_policy_id = (known after apply) 2026-01-30 02:36:44.461719 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.461726 | orchestrator | + security_group_ids = (known after apply) 2026-01-30 02:36:44.461732 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.461739 | orchestrator | 2026-01-30 02:36:44.461746 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461752 | orchestrator | + ip_address = "192.168.16.254/32" 2026-01-30 02:36:44.461759 | orchestrator | } 2026-01-30 02:36:44.461766 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461772 | orchestrator | + ip_address = "192.168.16.8/32" 2026-01-30 02:36:44.461779 | orchestrator | } 2026-01-30 02:36:44.461785 | orchestrator | + allowed_address_pairs { 2026-01-30 02:36:44.461792 | orchestrator | + ip_address = "192.168.16.9/32" 2026-01-30 02:36:44.461799 | orchestrator | } 2026-01-30 
02:36:44.461805 | orchestrator | 2026-01-30 02:36:44.461816 | orchestrator | + binding (known after apply) 2026-01-30 02:36:44.461823 | orchestrator | 2026-01-30 02:36:44.461830 | orchestrator | + fixed_ip { 2026-01-30 02:36:44.461837 | orchestrator | + ip_address = "192.168.16.15" 2026-01-30 02:36:44.461844 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.461850 | orchestrator | } 2026-01-30 02:36:44.461857 | orchestrator | } 2026-01-30 02:36:44.461863 | orchestrator | 2026-01-30 02:36:44.461870 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-01-30 02:36:44.461877 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-01-30 02:36:44.461883 | orchestrator | + force_destroy = false 2026-01-30 02:36:44.461890 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.461897 | orchestrator | + port_id = (known after apply) 2026-01-30 02:36:44.461903 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.461910 | orchestrator | + router_id = (known after apply) 2026-01-30 02:36:44.461917 | orchestrator | + subnet_id = (known after apply) 2026-01-30 02:36:44.461923 | orchestrator | } 2026-01-30 02:36:44.461930 | orchestrator | 2026-01-30 02:36:44.461940 | orchestrator | # openstack_networking_router_v2.router will be created 2026-01-30 02:36:44.461951 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-01-30 02:36:44.461962 | orchestrator | + admin_state_up = (known after apply) 2026-01-30 02:36:44.461973 | orchestrator | + all_tags = (known after apply) 2026-01-30 02:36:44.461985 | orchestrator | + availability_zone_hints = [ 2026-01-30 02:36:44.461997 | orchestrator | + "nova", 2026-01-30 02:36:44.462008 | orchestrator | ] 2026-01-30 02:36:44.462090 | orchestrator | + distributed = (known after apply) 2026-01-30 02:36:44.462100 | orchestrator | + enable_snat = (known after apply) 2026-01-30 02:36:44.462107 | 
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-01-30 02:36:44.462121 | orchestrator | + external_qos_policy_id = (known after apply) 2026-01-30 02:36:44.462128 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.462135 | orchestrator | + name = "testbed" 2026-01-30 02:36:44.462142 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.462148 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.462155 | orchestrator | 2026-01-30 02:36:44.462162 | orchestrator | + external_fixed_ip (known after apply) 2026-01-30 02:36:44.462168 | orchestrator | } 2026-01-30 02:36:44.462175 | orchestrator | 2026-01-30 02:36:44.462182 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-01-30 02:36:44.462189 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-01-30 02:36:44.462196 | orchestrator | + description = "ssh" 2026-01-30 02:36:44.462202 | orchestrator | + direction = "ingress" 2026-01-30 02:36:44.462209 | orchestrator | + ethertype = "IPv4" 2026-01-30 02:36:44.462215 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.462222 | orchestrator | + port_range_max = 22 2026-01-30 02:36:44.462229 | orchestrator | + port_range_min = 22 2026-01-30 02:36:44.462235 | orchestrator | + protocol = "tcp" 2026-01-30 02:36:44.462242 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.462255 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-30 02:36:44.462262 | orchestrator | + remote_group_id = (known after apply) 2026-01-30 02:36:44.462269 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-30 02:36:44.462276 | orchestrator | + security_group_id = (known after apply) 2026-01-30 02:36:44.462288 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.462299 | orchestrator | } 2026-01-30 02:36:44.462310 | orchestrator | 2026-01-30 
02:36:44.462322 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-01-30 02:36:44.462333 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-01-30 02:36:44.462345 | orchestrator | + description = "wireguard" 2026-01-30 02:36:44.462357 | orchestrator | + direction = "ingress" 2026-01-30 02:36:44.462365 | orchestrator | + ethertype = "IPv4" 2026-01-30 02:36:44.462371 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.462378 | orchestrator | + port_range_max = 51820 2026-01-30 02:36:44.462385 | orchestrator | + port_range_min = 51820 2026-01-30 02:36:44.462392 | orchestrator | + protocol = "udp" 2026-01-30 02:36:44.462398 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.462405 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-30 02:36:44.462411 | orchestrator | + remote_group_id = (known after apply) 2026-01-30 02:36:44.462418 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-01-30 02:36:44.462425 | orchestrator | + security_group_id = (known after apply) 2026-01-30 02:36:44.462431 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.462438 | orchestrator | } 2026-01-30 02:36:44.462445 | orchestrator | 2026-01-30 02:36:44.462451 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-01-30 02:36:44.462458 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-01-30 02:36:44.462465 | orchestrator | + direction = "ingress" 2026-01-30 02:36:44.462471 | orchestrator | + ethertype = "IPv4" 2026-01-30 02:36:44.462478 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.462485 | orchestrator | + protocol = "tcp" 2026-01-30 02:36:44.462491 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.462498 | orchestrator | + remote_address_group_id = (known 
after apply) 2026-01-30 02:36:44.462504 | orchestrator | + remote_group_id = (known after apply) 2026-01-30 02:36:44.462511 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-30 02:36:44.462518 | orchestrator | + security_group_id = (known after apply) 2026-01-30 02:36:44.462524 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.462531 | orchestrator | } 2026-01-30 02:36:44.462537 | orchestrator | 2026-01-30 02:36:44.462544 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-01-30 02:36:44.462551 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-01-30 02:36:44.462557 | orchestrator | + direction = "ingress" 2026-01-30 02:36:44.462564 | orchestrator | + ethertype = "IPv4" 2026-01-30 02:36:44.462571 | orchestrator | + id = (known after apply) 2026-01-30 02:36:44.462577 | orchestrator | + protocol = "udp" 2026-01-30 02:36:44.462584 | orchestrator | + region = (known after apply) 2026-01-30 02:36:44.462590 | orchestrator | + remote_address_group_id = (known after apply) 2026-01-30 02:36:44.462597 | orchestrator | + remote_group_id = (known after apply) 2026-01-30 02:36:44.462604 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-01-30 02:36:44.462610 | orchestrator | + security_group_id = (known after apply) 2026-01-30 02:36:44.462616 | orchestrator | + tenant_id = (known after apply) 2026-01-30 02:36:44.462622 | orchestrator | } 2026-01-30 02:36:44.462633 | orchestrator | 2026-01-30 02:36:44.462643 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2026-01-30 02:36:44.462659 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2026-01-30 02:36:44.462670 | orchestrator | + direction = "ingress" 2026-01-30 02:36:44.462680 | orchestrator | + ethertype = "IPv4" 2026-01-30 02:36:44.462692 | orchestrator | + id = 
(known after apply)
2026-01-30 02:36:44.462698 | orchestrator | + protocol = "icmp"
2026-01-30 02:36:44.462705 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.462711 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-30 02:36:44.462717 | orchestrator | + remote_group_id = (known after apply)
2026-01-30 02:36:44.462723 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-30 02:36:44.462729 | orchestrator | + security_group_id = (known after apply)
2026-01-30 02:36:44.462735 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.462741 | orchestrator | }
2026-01-30 02:36:44.462748 | orchestrator |
2026-01-30 02:36:44.462754 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-01-30 02:36:44.462765 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-01-30 02:36:44.462772 | orchestrator | + direction = "ingress"
2026-01-30 02:36:44.462778 | orchestrator | + ethertype = "IPv4"
2026-01-30 02:36:44.462784 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.462791 | orchestrator | + protocol = "tcp"
2026-01-30 02:36:44.462797 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.462803 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-30 02:36:44.462814 | orchestrator | + remote_group_id = (known after apply)
2026-01-30 02:36:44.462820 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-30 02:36:44.462827 | orchestrator | + security_group_id = (known after apply)
2026-01-30 02:36:44.462833 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.462839 | orchestrator | }
2026-01-30 02:36:44.462845 | orchestrator |
2026-01-30 02:36:44.462852 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-01-30 02:36:44.462858 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-01-30 02:36:44.462864 | orchestrator | + direction = "ingress"
2026-01-30 02:36:44.462870 | orchestrator | + ethertype = "IPv4"
2026-01-30 02:36:44.462877 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.462883 | orchestrator | + protocol = "udp"
2026-01-30 02:36:44.462889 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.462895 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-30 02:36:44.462901 | orchestrator | + remote_group_id = (known after apply)
2026-01-30 02:36:44.462908 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-30 02:36:44.462914 | orchestrator | + security_group_id = (known after apply)
2026-01-30 02:36:44.462920 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.462926 | orchestrator | }
2026-01-30 02:36:44.462932 | orchestrator |
2026-01-30 02:36:44.462938 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-01-30 02:36:44.462945 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-01-30 02:36:44.462951 | orchestrator | + direction = "ingress"
2026-01-30 02:36:44.462961 | orchestrator | + ethertype = "IPv4"
2026-01-30 02:36:44.462971 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.462981 | orchestrator | + protocol = "icmp"
2026-01-30 02:36:44.462991 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.463002 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-30 02:36:44.463012 | orchestrator | + remote_group_id = (known after apply)
2026-01-30 02:36:44.463023 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-30 02:36:44.463047 | orchestrator | + security_group_id = (known after apply)
2026-01-30 02:36:44.463053 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.463065 | orchestrator | }
2026-01-30 02:36:44.463071 | orchestrator |
2026-01-30 02:36:44.463077 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-01-30 02:36:44.463084 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-01-30 02:36:44.463090 | orchestrator | + description = "vrrp"
2026-01-30 02:36:44.463096 | orchestrator | + direction = "ingress"
2026-01-30 02:36:44.463102 | orchestrator | + ethertype = "IPv4"
2026-01-30 02:36:44.463108 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.463115 | orchestrator | + protocol = "112"
2026-01-30 02:36:44.463121 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.463127 | orchestrator | + remote_address_group_id = (known after apply)
2026-01-30 02:36:44.463133 | orchestrator | + remote_group_id = (known after apply)
2026-01-30 02:36:44.463139 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-01-30 02:36:44.463145 | orchestrator | + security_group_id = (known after apply)
2026-01-30 02:36:44.463152 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.463158 | orchestrator | }
2026-01-30 02:36:44.463164 | orchestrator |
2026-01-30 02:36:44.463170 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-01-30 02:36:44.463177 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-01-30 02:36:44.463183 | orchestrator | + all_tags = (known after apply)
2026-01-30 02:36:44.463189 | orchestrator | + description = "management security group"
2026-01-30 02:36:44.463195 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.463201 | orchestrator | + name = "testbed-management"
2026-01-30 02:36:44.463208 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.463214 | orchestrator | + stateful = (known after apply)
2026-01-30 02:36:44.463220 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.463226 | orchestrator | }
2026-01-30 02:36:44.463232 | orchestrator |
2026-01-30 02:36:44.463238 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-01-30 02:36:44.463245 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-01-30 02:36:44.463251 | orchestrator | + all_tags = (known after apply)
2026-01-30 02:36:44.463257 | orchestrator | + description = "node security group"
2026-01-30 02:36:44.463263 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.463269 | orchestrator | + name = "testbed-node"
2026-01-30 02:36:44.463275 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.463281 | orchestrator | + stateful = (known after apply)
2026-01-30 02:36:44.463288 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.463294 | orchestrator | }
2026-01-30 02:36:44.463300 | orchestrator |
2026-01-30 02:36:44.463310 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-01-30 02:36:44.463321 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-01-30 02:36:44.463331 | orchestrator | + all_tags = (known after apply)
2026-01-30 02:36:44.463341 | orchestrator | + cidr = "192.168.16.0/20"
2026-01-30 02:36:44.463352 | orchestrator | + dns_nameservers = [
2026-01-30 02:36:44.463362 | orchestrator | + "8.8.8.8",
2026-01-30 02:36:44.463373 | orchestrator | + "9.9.9.9",
2026-01-30 02:36:44.463385 | orchestrator | ]
2026-01-30 02:36:44.463391 | orchestrator | + enable_dhcp = true
2026-01-30 02:36:44.463398 | orchestrator | + gateway_ip = (known after apply)
2026-01-30 02:36:44.463404 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.463410 | orchestrator | + ip_version = 4
2026-01-30 02:36:44.463416 | orchestrator | + ipv6_address_mode = (known after apply)
2026-01-30 02:36:44.463422 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-01-30 02:36:44.463433 | orchestrator | + name = "subnet-testbed-management"
2026-01-30 02:36:44.463439 | orchestrator | + network_id = (known after apply)
2026-01-30 02:36:44.463446 | orchestrator | + no_gateway = false
2026-01-30 02:36:44.463452 | orchestrator | + region = (known after apply)
2026-01-30 02:36:44.463458 | orchestrator | + service_types = (known after apply)
2026-01-30 02:36:44.463473 | orchestrator | + tenant_id = (known after apply)
2026-01-30 02:36:44.463479 | orchestrator |
2026-01-30 02:36:44.463485 | orchestrator | + allocation_pool {
2026-01-30 02:36:44.463491 | orchestrator | + end = "192.168.31.250"
2026-01-30 02:36:44.463498 | orchestrator | + start = "192.168.31.200"
2026-01-30 02:36:44.463504 | orchestrator | }
2026-01-30 02:36:44.463510 | orchestrator | }
2026-01-30 02:36:44.463516 | orchestrator |
2026-01-30 02:36:44.463522 | orchestrator | # terraform_data.image will be created
2026-01-30 02:36:44.463528 | orchestrator | + resource "terraform_data" "image" {
2026-01-30 02:36:44.463535 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.463541 | orchestrator | + input = "Ubuntu 24.04"
2026-01-30 02:36:44.463547 | orchestrator | + output = (known after apply)
2026-01-30 02:36:44.463553 | orchestrator | }
2026-01-30 02:36:44.463559 | orchestrator |
2026-01-30 02:36:44.463565 | orchestrator | # terraform_data.image_node will be created
2026-01-30 02:36:44.463572 | orchestrator | + resource "terraform_data" "image_node" {
2026-01-30 02:36:44.463578 | orchestrator | + id = (known after apply)
2026-01-30 02:36:44.463584 | orchestrator | + input = "Ubuntu 24.04"
2026-01-30 02:36:44.463590 | orchestrator | + output = (known after apply)
2026-01-30 02:36:44.463596 | orchestrator | }
2026-01-30 02:36:44.463602 | orchestrator |
2026-01-30 02:36:44.463609 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-01-30 02:36:44.463615 | orchestrator |
2026-01-30 02:36:44.463621 | orchestrator | Changes to Outputs:
2026-01-30 02:36:44.463627 | orchestrator | + manager_address = (sensitive value)
2026-01-30 02:36:44.463634 | orchestrator | + private_key = (sensitive value)
2026-01-30 02:36:44.683994 | orchestrator | terraform_data.image: Creating...
2026-01-30 02:36:44.684179 | orchestrator | terraform_data.image_node: Creating...
2026-01-30 02:36:44.684196 | orchestrator | terraform_data.image: Creation complete after 0s [id=0eaff07f-678c-1089-efd3-f60548ea4401]
2026-01-30 02:36:44.684204 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=fe02919c-6d38-9404-a054-d685420c896b]
2026-01-30 02:36:44.692489 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-01-30 02:36:44.698776 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-01-30 02:36:44.708553 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-01-30 02:36:44.709181 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-01-30 02:36:44.715844 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-01-30 02:36:44.715901 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-01-30 02:36:44.716390 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-01-30 02:36:44.721305 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-01-30 02:36:44.722703 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-01-30 02:36:44.738658 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-01-30 02:36:45.211859 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-01-30 02:36:45.222929 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-01-30 02:36:45.228045 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-30 02:36:45.229446 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-01-30 02:36:45.283669 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-01-30 02:36:45.292602 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-01-30 02:36:46.292163 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=bacdf67e-1158-431d-80ae-bf86375a70b2]
2026-01-30 02:36:46.307993 | orchestrator | local_file.id_rsa_pub: Creating...
2026-01-30 02:36:46.910181 | orchestrator | local_file.id_rsa_pub: Creation complete after 1s [id=507c8caf7e998fa81d6f96aed6036c6691286b99]
2026-01-30 02:36:46.920337 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-01-30 02:36:46.928871 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=bb010108c95ffa58a416e69828b0dad56705f189]
2026-01-30 02:36:46.942226 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-01-30 02:36:48.330845 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=61a881f5-0027-4515-8019-0b50414c8fea]
2026-01-30 02:36:48.336967 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-01-30 02:36:48.342490 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=ac342dcc-6378-474e-8bd4-fa421e59d21e]
2026-01-30 02:36:48.348968 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-01-30 02:36:48.350165 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660]
2026-01-30 02:36:48.355498 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-01-30 02:36:48.366756 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=f069451a-3954-45d9-86d9-4bd6a8a4900c]
2026-01-30 02:36:48.367591 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=5a64c5df-bd04-40a2-9182-2fad2953f290]
2026-01-30 02:36:48.377990 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-01-30 02:36:48.379242 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-01-30 02:36:48.380630 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=b216a188-2311-40bc-9fb1-2473213c5e7c]
2026-01-30 02:36:48.384572 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-01-30 02:36:48.400965 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=5df04f9b-dd43-4d22-91db-5ca8ef5423a4]
2026-01-30 02:36:48.404906 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-01-30 02:36:48.417139 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=89867505-ff36-4695-8b18-6c1e230d96db]
2026-01-30 02:36:48.464683 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=6d18679f-3a03-46cd-a085-d473f98711de]
2026-01-30 02:36:49.254562 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=07172178-ecf3-4b1e-a432-4a473eec9919]
2026-01-30 02:36:49.260517 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-01-30 02:36:50.277469 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=7b944efd-69bd-418c-961b-5e326c11b2a6]
2026-01-30 02:36:51.698114 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=668a7bb6-1d9a-43cc-b5c1-9e85d024a763]
2026-01-30 02:36:51.732344 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=6f62995b-1598-4105-b2bc-5f2a0c02af64]
2026-01-30 02:36:51.754275 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=45889879-29ea-4e0d-a22d-11f14312e02a]
2026-01-30 02:36:51.780097 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=78d852ad-2d79-4944-8416-895694d96844]
2026-01-30 02:36:51.781355 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=288be04e-f5c6-44d1-9ba7-92e7bdbdbceb]
2026-01-30 02:36:51.819044 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=d146c94a-adac-4c27-b0d5-e5e0f56c9da7]
2026-01-30 02:36:52.444935 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=c246764f-5981-461b-a0f2-378821878689]
2026-01-30 02:36:52.451912 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-01-30 02:36:52.452587 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-01-30 02:36:52.453334 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-01-30 02:36:52.634463 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=3821a40b-ff43-4bb9-81c9-e60ff94ce2ef]
2026-01-30 02:36:52.641755 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-01-30 02:36:52.642768 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-01-30 02:36:52.644225 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-01-30 02:36:52.647989 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-01-30 02:36:52.661558 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-01-30 02:36:52.661884 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-01-30 02:36:52.665128 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-01-30 02:36:52.665402 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-01-30 02:36:52.831635 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=c53d29bb-0ced-4b0d-9998-89410d05a408]
2026-01-30 02:36:52.847728 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-01-30 02:36:52.883676 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=f5f57a61-ed7e-4c22-82a4-0daa2dd64b9f]
2026-01-30 02:36:52.895588 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-01-30 02:36:53.081760 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=93e3b6e6-3bbb-4dee-947a-d694423d2a82]
2026-01-30 02:36:53.089021 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-01-30 02:36:53.272722 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=85d3f859-aa7b-4f78-b798-e492b6c24ec2]
2026-01-30 02:36:53.279639 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-01-30 02:36:53.304457 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=4cfb3e23-4295-49d0-b8ef-abf86d4fc4db]
2026-01-30 02:36:53.310364 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-01-30 02:36:53.533697 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=4165de4c-05ad-4a19-91d1-ffb584830d53]
2026-01-30 02:36:53.542131 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-01-30 02:36:53.545480 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=aa3c9bc0-76f6-444e-ab56-85bdcc6dcc2b]
2026-01-30 02:36:53.552600 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-01-30 02:36:53.617218 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=ee55e152-7e5e-40c4-9384-a2ee2e42f4bf]
2026-01-30 02:36:53.628296 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=942c0f1d-6b94-4290-9d42-e962339048e1]
2026-01-30 02:36:53.628540 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-01-30 02:36:53.703074 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=02616041-a237-47a4-9bf2-2bd6b34ae565]
2026-01-30 02:36:53.778668 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=ead416a3-d935-45d6-a836-f8ea2baeb4cf]
2026-01-30 02:36:53.808480 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=4cbde05d-6e33-43b8-9dee-307e12fdd1d6]
2026-01-30 02:36:53.927810 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=6a801d27-44fb-4b3c-b82e-61e06e29fca3]
2026-01-30 02:36:54.094203 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=066b1445-9717-4cef-83c7-cf3f8c543046]
2026-01-30 02:36:54.164567 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=7259f922-5c56-421a-8610-b719e2e20dcf]
2026-01-30 02:36:54.235560 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=76facdec-799e-4c90-9158-f1be3dfe94ae]
2026-01-30 02:36:54.326890 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=625191ab-274f-47bd-b2bd-a50d3b927662]
2026-01-30 02:36:54.723077 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=3b73a667-9a9c-4009-822f-e6827ada0383]
2026-01-30 02:36:54.752014 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-01-30 02:36:54.758910 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-01-30 02:36:54.759828 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-01-30 02:36:54.761781 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-01-30 02:36:54.769956 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-01-30 02:36:54.781662 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-01-30 02:36:54.782548 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-01-30 02:36:56.772818 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=1a353d6c-3fcc-4322-9954-0c10f0e01188]
2026-01-30 02:36:56.784821 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-01-30 02:36:56.788911 | orchestrator | local_file.inventory: Creating...
2026-01-30 02:36:56.789426 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-01-30 02:36:56.794556 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=288208ec366d024343ef3d63e3fc9fead89f9970]
2026-01-30 02:36:56.798740 | orchestrator | local_file.inventory: Creation complete after 0s [id=ff76510ae8301715ee1aa9e0334b691ccdd34934]
2026-01-30 02:36:57.544781 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=1a353d6c-3fcc-4322-9954-0c10f0e01188]
2026-01-30 02:37:04.763927 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-01-30 02:37:04.767167 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-01-30 02:37:04.767274 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-01-30 02:37:04.772688 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-01-30 02:37:04.783380 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-01-30 02:37:04.783475 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-01-30 02:37:14.770172 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-01-30 02:37:14.770268 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-01-30 02:37:14.770311 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-01-30 02:37:14.773155 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-01-30 02:37:14.784539 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-01-30 02:37:14.784674 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-01-30 02:37:15.177338 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=8565da13-5038-46f9-8e77-bb9ba983c9b9]
2026-01-30 02:37:15.449302 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=62a9f95f-1378-4fc6-b994-dcc882213d4c]
2026-01-30 02:37:15.743177 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=836f7334-5453-48ba-8f3b-77f04830503f]
2026-01-30 02:37:24.770632 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-01-30 02:37:24.770852 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-01-30 02:37:24.785023 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-01-30 02:37:25.930379 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=54f3b4c1-9797-4593-be96-ee3f7b106f4f]
2026-01-30 02:37:25.969737 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=106430ce-52e2-40e9-9a65-4398cd0f16a0]
2026-01-30 02:37:26.001155 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=6c42be1b-01fc-4215-b2f5-80725b3e8b1b]
2026-01-30 02:37:26.032817 | orchestrator | null_resource.node_semaphore: Creating...
2026-01-30 02:37:26.035609 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3836292943416575817]
2026-01-30 02:37:26.039918 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-01-30 02:37:26.040021 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-01-30 02:37:26.048942 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-01-30 02:37:26.052481 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-01-30 02:37:26.058459 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-01-30 02:37:26.061807 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-01-30 02:37:26.064996 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-01-30 02:37:26.065262 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-01-30 02:37:26.085320 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-01-30 02:37:26.090666 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-01-30 02:37:29.438583 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=106430ce-52e2-40e9-9a65-4398cd0f16a0/5a64c5df-bd04-40a2-9182-2fad2953f290]
2026-01-30 02:37:29.449895 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=8565da13-5038-46f9-8e77-bb9ba983c9b9/89867505-ff36-4695-8b18-6c1e230d96db]
2026-01-30 02:37:29.485314 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=836f7334-5453-48ba-8f3b-77f04830503f/b216a188-2311-40bc-9fb1-2473213c5e7c]
2026-01-30 02:37:29.486522 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=106430ce-52e2-40e9-9a65-4398cd0f16a0/2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660]
2026-01-30 02:37:29.500065 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=8565da13-5038-46f9-8e77-bb9ba983c9b9/f069451a-3954-45d9-86d9-4bd6a8a4900c]
2026-01-30 02:37:29.526830 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=836f7334-5453-48ba-8f3b-77f04830503f/5df04f9b-dd43-4d22-91db-5ca8ef5423a4]
2026-01-30 02:37:35.569634 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=106430ce-52e2-40e9-9a65-4398cd0f16a0/6d18679f-3a03-46cd-a085-d473f98711de]
2026-01-30 02:37:35.596604 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=8565da13-5038-46f9-8e77-bb9ba983c9b9/ac342dcc-6378-474e-8bd4-fa421e59d21e]
2026-01-30 02:37:35.634950 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=836f7334-5453-48ba-8f3b-77f04830503f/61a881f5-0027-4515-8019-0b50414c8fea]
2026-01-30 02:37:36.091404 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-01-30 02:37:46.092373 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-01-30 02:37:46.392661 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f797462f-d65f-493c-b15b-58006f5e78b5]
2026-01-30 02:37:46.410429 | orchestrator |
2026-01-30 02:37:46.410494 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-01-30 02:37:46.410529 | orchestrator |
2026-01-30 02:37:46.410538 | orchestrator | Outputs:
2026-01-30 02:37:46.410545 | orchestrator |
2026-01-30 02:37:46.410571 | orchestrator | manager_address =
2026-01-30 02:37:46.410579 | orchestrator | private_key =
2026-01-30 02:37:46.688595 | orchestrator | ok: Runtime: 0:01:09.128181
2026-01-30 02:37:46.721727 |
2026-01-30 02:37:46.721855 | TASK [Fetch manager address]
2026-01-30 02:37:47.196403 | orchestrator | ok
2026-01-30 02:37:47.203715 |
2026-01-30 02:37:47.203822 | TASK [Set manager_host address]
2026-01-30 02:37:47.273810 | orchestrator | ok
2026-01-30 02:37:47.282719 |
2026-01-30 02:37:47.282855 | LOOP [Update ansible collections]
2026-01-30 02:37:48.737938 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-30 02:37:48.738280 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-30 02:37:48.738336 | orchestrator | Starting galaxy collection install process
2026-01-30 02:37:48.738377 | orchestrator | Process install dependency map
2026-01-30 02:37:48.738413 | orchestrator | Starting collection install process
2026-01-30 02:37:48.738447 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2026-01-30 02:37:48.738483 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2026-01-30 02:37:48.738522 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-01-30 02:37:48.738594 | orchestrator | ok: Item: commons Runtime: 0:00:01.132599
2026-01-30 02:37:49.664493 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-30 02:37:49.664621 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-01-30 02:37:49.664652 | orchestrator | Starting galaxy collection install process
2026-01-30 02:37:49.664676 | orchestrator | Process install dependency map
2026-01-30 02:37:49.664697 | orchestrator | Starting collection install process
2026-01-30 02:37:49.664719 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2026-01-30 02:37:49.664740 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2026-01-30 02:37:49.664760 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-30 02:37:49.664791 | orchestrator | ok: Item: services Runtime: 0:00:00.669911
2026-01-30 02:37:49.673717 |
2026-01-30 02:37:49.673832 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-01-30 02:38:00.195693 | orchestrator | ok
2026-01-30 02:38:00.205575 |
2026-01-30 02:38:00.205695 | TASK [Wait a little longer for the manager so that everything is ready]
2026-01-30 02:39:00.246976 | orchestrator | ok
2026-01-30 02:39:00.256726 |
2026-01-30 02:39:00.256844 | TASK [Fetch manager ssh hostkey]
2026-01-30 02:39:01.832710 | orchestrator | Output suppressed because no_log was given
2026-01-30 02:39:01.849234 |
2026-01-30 02:39:01.849440 | TASK [Get ssh keypair from terraform environment]
2026-01-30 02:39:02.389813 | orchestrator | ok: Runtime: 0:00:00.011097
2026-01-30 02:39:02.405947 |
2026-01-30 02:39:02.406186 | TASK [Point out that the following task takes some time and does not give any output]
2026-01-30 02:39:02.453601 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-01-30 02:39:02.463788 |
2026-01-30 02:39:02.463920 | TASK [Run manager part 0]
2026-01-30 02:39:03.359629 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-01-30 02:39:03.403733 | orchestrator |
2026-01-30 02:39:03.403772 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-01-30 02:39:03.403778 | orchestrator |
2026-01-30 02:39:03.403790 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-01-30 02:39:04.975007 | orchestrator | ok: [testbed-manager]
2026-01-30 02:39:04.975076 | orchestrator |
2026-01-30 02:39:04.975101 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-01-30 02:39:04.975110 | orchestrator |
2026-01-30 02:39:04.975119 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-30 02:39:06.741422 | orchestrator | ok: [testbed-manager]
2026-01-30 02:39:06.741485 | orchestrator |
2026-01-30 02:39:06.741496 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-01-30 02:39:07.413782 | orchestrator | ok: [testbed-manager]
2026-01-30 02:39:07.413844 | orchestrator |
2026-01-30 02:39:07.413852 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-01-30 02:39:07.458748 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:39:07.458811 | orchestrator |
2026-01-30 02:39:07.458824 | orchestrator | TASK [Update package cache] ****************************************************
2026-01-30 02:39:07.490761 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:39:07.490818 | orchestrator |
2026-01-30 02:39:07.490827 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-30 02:39:07.515315 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:39:07.515375 | orchestrator |
2026-01-30 02:39:07.515385 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-30 02:39:07.540057 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:39:07.540170 | orchestrator |
2026-01-30 02:39:07.540194 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-01-30 02:39:07.574065 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:39:07.574112 | orchestrator |
2026-01-30 02:39:07.574119 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-01-30 02:39:07.612012 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:39:07.612094 | orchestrator |
2026-01-30 02:39:07.612108 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-01-30 02:39:07.646011 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:39:07.646171 | orchestrator |
2026-01-30 02:39:07.646196 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-01-30 02:39:08.321760 | orchestrator | changed: [testbed-manager]
2026-01-30 02:39:08.321817 | orchestrator |
2026-01-30 02:39:08.321826 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-01-30 02:41:30.925300 | orchestrator | changed: [testbed-manager]
2026-01-30 02:41:30.925416 | orchestrator |
2026-01-30 02:41:30.925434 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-30 02:43:06.719567 | orchestrator | changed: [testbed-manager]
2026-01-30 02:43:06.719678 | orchestrator |
2026-01-30 02:43:06.719694 | orchestrator | TASK [Install required packages] ***********************************************
2026-01-30 02:43:26.438072 | orchestrator | changed: [testbed-manager]
2026-01-30 02:43:26.438170 | orchestrator |
2026-01-30 02:43:26.438234 | orchestrator | TASK [Remove some python packages] *********************************************
2026-01-30 02:43:34.503981 | orchestrator | changed: [testbed-manager]
2026-01-30 02:43:34.504079 | orchestrator |
2026-01-30 02:43:34.504095 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-01-30 02:43:34.555272 | orchestrator | ok: [testbed-manager]
2026-01-30 02:43:34.555361 | orchestrator |
2026-01-30 02:43:34.555413 | orchestrator | TASK [Get current user] ********************************************************
2026-01-30 02:43:35.287178 | orchestrator | ok: [testbed-manager]
2026-01-30 02:43:35.287288 | orchestrator |
2026-01-30 02:43:35.287302 | orchestrator | TASK [Create venv directory] ***************************************************
2026-01-30 02:43:35.963357 | orchestrator | changed: [testbed-manager]
2026-01-30 02:43:35.963452 | orchestrator |
2026-01-30 02:43:35.963463 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-01-30 02:43:42.004190 | orchestrator | changed: [testbed-manager]
2026-01-30 02:43:42.004236 | orchestrator |
2026-01-30 02:43:42.004259 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-01-30 02:43:47.365266 | orchestrator | changed: [testbed-manager]
2026-01-30 02:43:47.365317 | orchestrator |
2026-01-30 02:43:47.365330 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-01-30 02:43:49.672411 | orchestrator | changed: [testbed-manager]
2026-01-30 02:43:49.672453 | orchestrator |
2026-01-30 02:43:49.672462 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-01-30 02:43:51.233025 | orchestrator | changed: [testbed-manager] 2026-01-30 02:43:51.233069 | orchestrator | 2026-01-30 02:43:51.233078 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-30 02:43:52.276485 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-30 02:43:52.276587 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-30 02:43:52.276602 | orchestrator | 2026-01-30 02:43:52.276615 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-30 02:43:52.319126 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-30 02:43:52.319220 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-30 02:43:52.319244 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-30 02:43:52.319264 | orchestrator | deprecation_warnings=False in ansible.cfg. 
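The "Create directories in /opt/src" loop above expands to one `mkdir -p` per repository item before the sources are synced into them. A minimal reproduction, using a temporary root instead of the real /opt/src:

```shell
# Stand-in for /opt/src; the two item names are copied from the log.
root=$(mktemp -d)
for repo in osism/ansible-collection-commons osism/ansible-collection-services; do
  mkdir -p "$root/$repo"
done
ls "$root/osism"
```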
2026-01-30 02:43:56.389956 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-30 02:43:56.389999 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-30 02:43:56.390007 | orchestrator | 2026-01-30 02:43:56.390015 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-30 02:43:56.975939 | orchestrator | changed: [testbed-manager] 2026-01-30 02:43:56.976052 | orchestrator | 2026-01-30 02:43:56.976068 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-30 02:45:16.486943 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-30 02:45:16.486997 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-30 02:45:16.487006 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-30 02:45:16.487013 | orchestrator | 2026-01-30 02:45:16.487019 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-30 02:45:18.748879 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-30 02:45:18.748982 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-30 02:45:18.748998 | orchestrator | 2026-01-30 02:45:18.749011 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-30 02:45:18.749023 | orchestrator | 2026-01-30 02:45:18.749034 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 02:45:20.169032 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:20.169147 | orchestrator | 2026-01-30 02:45:20.169179 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-30 02:45:20.211172 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:20.211255 | 
orchestrator | 2026-01-30 02:45:20.211271 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-30 02:45:20.283741 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:20.283817 | orchestrator | 2026-01-30 02:45:20.283831 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-30 02:45:21.049973 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:21.050255 | orchestrator | 2026-01-30 02:45:21.050286 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-30 02:45:21.763501 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:21.763660 | orchestrator | 2026-01-30 02:45:21.763680 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-30 02:45:23.118187 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-30 02:45:23.118239 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-30 02:45:23.118250 | orchestrator | 2026-01-30 02:45:23.118272 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-30 02:45:24.487775 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:24.487872 | orchestrator | 2026-01-30 02:45:24.487882 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-30 02:45:26.214750 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-30 02:45:26.214793 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-30 02:45:26.214801 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-30 02:45:26.214808 | orchestrator | 2026-01-30 02:45:26.214815 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-30 02:45:26.276699 | orchestrator | skipping: 
[testbed-manager] 2026-01-30 02:45:26.276751 | orchestrator | 2026-01-30 02:45:26.276762 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-30 02:45:26.341854 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:26.341891 | orchestrator | 2026-01-30 02:45:26.341899 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-30 02:45:26.890724 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:26.890817 | orchestrator | 2026-01-30 02:45:26.890833 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-30 02:45:26.961022 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:26.961098 | orchestrator | 2026-01-30 02:45:26.961110 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-30 02:45:27.827293 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-30 02:45:27.827336 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:27.827345 | orchestrator | 2026-01-30 02:45:27.827352 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-30 02:45:27.861332 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:27.861370 | orchestrator | 2026-01-30 02:45:27.861378 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-30 02:45:27.893731 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:27.893766 | orchestrator | 2026-01-30 02:45:27.893773 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-30 02:45:27.933361 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:27.933403 | orchestrator | 2026-01-30 02:45:27.933414 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-30 02:45:28.003382 | 
orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:28.003422 | orchestrator | 2026-01-30 02:45:28.003429 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-30 02:45:28.703545 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:28.703634 | orchestrator | 2026-01-30 02:45:28.703640 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-30 02:45:28.703645 | orchestrator | 2026-01-30 02:45:28.703649 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 02:45:30.050940 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:30.050996 | orchestrator | 2026-01-30 02:45:30.051003 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-30 02:45:30.994291 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:30.994364 | orchestrator | 2026-01-30 02:45:30.994378 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 02:45:30.994391 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-30 02:45:30.994400 | orchestrator | 2026-01-30 02:45:31.246790 | orchestrator | ok: Runtime: 0:06:28.335456 2026-01-30 02:45:31.263151 | 2026-01-30 02:45:31.263290 | TASK [Point out that the log in on the manager is now possible] 2026-01-30 02:45:31.301527 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-30 02:45:31.310579 | 2026-01-30 02:45:31.310704 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-30 02:45:31.343605 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
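A PLAY RECAP line like the one above is easy to gate on when post-processing job logs: extract the `failed` and `unreachable` counters and reject anything non-zero. This is a sketch of such a check, not part of the job itself; the recap text is pasted from the log.

```shell
# Parse the ok/changed/failed counters out of a recap line and require
# failed=0 and unreachable=0.
recap='testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | grep -o 'failed=[0-9]*' | cut -d= -f2)
unreachable=$(printf '%s\n' "$recap" | grep -o 'unreachable=[0-9]*' | cut -d= -f2)
[ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ] && echo "recap clean"
```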
2026-01-30 02:45:31.351200 | 2026-01-30 02:45:31.351307 | TASK [Run manager part 1 + 2] 2026-01-30 02:45:32.195484 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-30 02:45:32.253028 | orchestrator | 2026-01-30 02:45:32.253078 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-30 02:45:32.253085 | orchestrator | 2026-01-30 02:45:32.253098 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 02:45:35.118105 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:35.118161 | orchestrator | 2026-01-30 02:45:35.118181 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-30 02:45:35.159205 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:35.159259 | orchestrator | 2026-01-30 02:45:35.159269 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-30 02:45:35.209073 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:35.209143 | orchestrator | 2026-01-30 02:45:35.209154 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-30 02:45:35.259273 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:35.259356 | orchestrator | 2026-01-30 02:45:35.259378 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-30 02:45:35.343515 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:35.343628 | orchestrator | 2026-01-30 02:45:35.343648 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-30 02:45:35.409215 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:35.409325 | orchestrator | 2026-01-30 02:45:35.409352 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-30 02:45:35.459279 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-30 02:45:35.459385 | orchestrator | 2026-01-30 02:45:35.459400 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-30 02:45:36.190758 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:36.190851 | orchestrator | 2026-01-30 02:45:36.190869 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-30 02:45:36.235949 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:36.236035 | orchestrator | 2026-01-30 02:45:36.236050 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-30 02:45:37.623849 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:37.623965 | orchestrator | 2026-01-30 02:45:37.623985 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-30 02:45:38.216848 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:38.216931 | orchestrator | 2026-01-30 02:45:38.216951 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-30 02:45:39.328634 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:39.328731 | orchestrator | 2026-01-30 02:45:39.328750 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-30 02:45:54.623066 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:54.623187 | orchestrator | 2026-01-30 02:45:54.623215 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-30 02:45:55.329601 | orchestrator | ok: [testbed-manager] 2026-01-30 02:45:55.329774 | orchestrator | 2026-01-30 02:45:55.329799 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-30 02:45:55.390091 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:45:55.390188 | orchestrator | 2026-01-30 02:45:55.390210 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-30 02:45:56.373458 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:56.373570 | orchestrator | 2026-01-30 02:45:56.373586 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-30 02:45:57.318892 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:57.318990 | orchestrator | 2026-01-30 02:45:57.319006 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-30 02:45:57.880807 | orchestrator | changed: [testbed-manager] 2026-01-30 02:45:57.880852 | orchestrator | 2026-01-30 02:45:57.880860 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-30 02:45:57.937473 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-30 02:45:57.937590 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-30 02:45:57.937608 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-30 02:45:57.937686 | orchestrator | deprecation_warnings=False in ansible.cfg. 
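The repository role above removed /etc/apt/sources.list and copied a deb822-style ubuntu.sources file in its place. An illustrative stanza of that format follows; the mirror URI, suites, and keyring path are generic assumptions, not values read from the job, and the file is written to a temp path rather than /etc/apt/sources.list.d.

```shell
# Write a sample deb822 source stanza and confirm the required fields
# are present.
src=$(mktemp)
cat > "$src" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main universe
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
grep -q '^Types: deb' "$src" && echo "deb822 stanza written"
```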
2026-01-30 02:46:00.115971 | orchestrator | changed: [testbed-manager] 2026-01-30 02:46:00.116080 | orchestrator | 2026-01-30 02:46:00.116100 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-30 02:46:08.856221 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-30 02:46:08.856262 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-30 02:46:08.856272 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-30 02:46:08.856278 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-30 02:46:08.856288 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-30 02:46:08.856294 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-30 02:46:08.856300 | orchestrator | 2026-01-30 02:46:08.856306 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-30 02:46:09.907639 | orchestrator | changed: [testbed-manager] 2026-01-30 02:46:09.907716 | orchestrator | 2026-01-30 02:46:09.907728 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-30 02:46:09.950211 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:46:09.950311 | orchestrator | 2026-01-30 02:46:09.950333 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-30 02:46:12.801777 | orchestrator | changed: [testbed-manager] 2026-01-30 02:46:12.801897 | orchestrator | 2026-01-30 02:46:12.801924 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-30 02:46:12.844320 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:46:12.844407 | orchestrator | 2026-01-30 02:46:12.844419 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-30 02:47:44.883733 | orchestrator | changed: [testbed-manager] 2026-01-30 
02:47:44.883899 | orchestrator | 2026-01-30 02:47:44.883922 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-30 02:47:45.983424 | orchestrator | ok: [testbed-manager] 2026-01-30 02:47:45.983520 | orchestrator | 2026-01-30 02:47:45.983540 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 02:47:45.983554 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-30 02:47:45.983567 | orchestrator | 2026-01-30 02:47:46.472617 | orchestrator | ok: Runtime: 0:02:14.443126 2026-01-30 02:47:46.490048 | 2026-01-30 02:47:46.490191 | TASK [Reboot manager] 2026-01-30 02:47:48.044182 | orchestrator | ok: Runtime: 0:00:00.933915 2026-01-30 02:47:48.061700 | 2026-01-30 02:47:48.061856 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-30 02:48:02.340564 | orchestrator | ok 2026-01-30 02:48:02.349613 | 2026-01-30 02:48:02.349723 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-30 02:49:02.395078 | orchestrator | ok 2026-01-30 02:49:02.404460 | 2026-01-30 02:49:02.404590 | TASK [Deploy manager + bootstrap nodes] 2026-01-30 02:49:04.830262 | orchestrator | 2026-01-30 02:49:04.830455 | orchestrator | # DEPLOY MANAGER 2026-01-30 02:49:04.830481 | orchestrator | 2026-01-30 02:49:04.830496 | orchestrator | + set -e 2026-01-30 02:49:04.830510 | orchestrator | + echo 2026-01-30 02:49:04.830524 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-30 02:49:04.830541 | orchestrator | + echo 2026-01-30 02:49:04.830593 | orchestrator | + cat /opt/manager-vars.sh 2026-01-30 02:49:04.833583 | orchestrator | export NUMBER_OF_NODES=6 2026-01-30 02:49:04.833642 | orchestrator | 2026-01-30 02:49:04.833657 | orchestrator | export CEPH_VERSION=reef 2026-01-30 02:49:04.833670 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-30 02:49:04.833683 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-01-30 02:49:04.833708 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-30 02:49:04.833719 | orchestrator | 2026-01-30 02:49:04.833738 | orchestrator | export ARA=false 2026-01-30 02:49:04.833750 | orchestrator | export DEPLOY_MODE=manager 2026-01-30 02:49:04.833772 | orchestrator | export TEMPEST=false 2026-01-30 02:49:04.833792 | orchestrator | export IS_ZUUL=true 2026-01-30 02:49:04.833811 | orchestrator | 2026-01-30 02:49:04.833840 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 02:49:04.833859 | orchestrator | export EXTERNAL_API=false 2026-01-30 02:49:04.833879 | orchestrator | 2026-01-30 02:49:04.833897 | orchestrator | export IMAGE_USER=ubuntu 2026-01-30 02:49:04.833919 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-30 02:49:04.833939 | orchestrator | 2026-01-30 02:49:04.833957 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-30 02:49:04.834078 | orchestrator | 2026-01-30 02:49:04.834105 | orchestrator | + echo 2026-01-30 02:49:04.834128 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-30 02:49:04.834423 | orchestrator | ++ export INTERACTIVE=false 2026-01-30 02:49:04.834453 | orchestrator | ++ INTERACTIVE=false 2026-01-30 02:49:04.834466 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-30 02:49:04.834507 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-30 02:49:04.834560 | orchestrator | + source /opt/manager-vars.sh 2026-01-30 02:49:04.834573 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-30 02:49:04.834585 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-30 02:49:04.834843 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-30 02:49:04.834871 | orchestrator | ++ CEPH_VERSION=reef 2026-01-30 02:49:04.834890 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-30 02:49:04.834909 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-30 02:49:04.834928 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-30 02:49:04.834949 | 
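The `source /opt/manager-vars.sh` trace above shows how the deploy wrapper assembles its environment: a vars file of plain `export` lines sourced under `set -e`. A minimal reproduction with a temporary vars file (the values are copied from the log; the file path is a stand-in for /opt/manager-vars.sh):

```shell
set -e
vars=$(mktemp)
cat > "$vars" <<'EOF'
export NUMBER_OF_NODES=6
export MANAGER_VERSION=9.5.0
export DEPLOY_MODE=manager
EOF
# Sourcing makes the exports visible to this shell and its children.
. "$vars"
echo "deploying $DEPLOY_MODE version $MANAGER_VERSION with $NUMBER_OF_NODES nodes"
```

Because the file is sourced rather than executed, the variables survive in the calling shell, which is why every `export FOO=...` in the trace is echoed twice by `set -x` (once as the `export`, once as the assignment).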
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-30 02:49:04.834969 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-30 02:49:04.835026 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-30 02:49:04.835046 | orchestrator | ++ export ARA=false 2026-01-30 02:49:04.835066 | orchestrator | ++ ARA=false 2026-01-30 02:49:04.835079 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-30 02:49:04.835090 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-30 02:49:04.835104 | orchestrator | ++ export TEMPEST=false 2026-01-30 02:49:04.835123 | orchestrator | ++ TEMPEST=false 2026-01-30 02:49:04.835140 | orchestrator | ++ export IS_ZUUL=true 2026-01-30 02:49:04.835158 | orchestrator | ++ IS_ZUUL=true 2026-01-30 02:49:04.835176 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 02:49:04.835193 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 02:49:04.835210 | orchestrator | ++ export EXTERNAL_API=false 2026-01-30 02:49:04.835227 | orchestrator | ++ EXTERNAL_API=false 2026-01-30 02:49:04.835244 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-30 02:49:04.835262 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-30 02:49:04.835280 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-30 02:49:04.835298 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-30 02:49:04.835315 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-30 02:49:04.835331 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-30 02:49:04.835348 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-30 02:49:04.886621 | orchestrator | + docker version 2026-01-30 02:49:05.136653 | orchestrator | Client: Docker Engine - Community 2026-01-30 02:49:05.136764 | orchestrator | Version: 27.5.1 2026-01-30 02:49:05.136784 | orchestrator | API version: 1.47 2026-01-30 02:49:05.136796 | orchestrator | Go version: go1.22.11 2026-01-30 02:49:05.136806 | orchestrator | Git commit: 9f9e405 2026-01-30 02:49:05.136818 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-30 02:49:05.136830 | orchestrator | OS/Arch: linux/amd64 2026-01-30 02:49:05.136841 | orchestrator | Context: default 2026-01-30 02:49:05.136851 | orchestrator | 2026-01-30 02:49:05.136863 | orchestrator | Server: Docker Engine - Community 2026-01-30 02:49:05.136874 | orchestrator | Engine: 2026-01-30 02:49:05.136886 | orchestrator | Version: 27.5.1 2026-01-30 02:49:05.136897 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-30 02:49:05.136938 | orchestrator | Go version: go1.22.11 2026-01-30 02:49:05.136950 | orchestrator | Git commit: 4c9b3b0 2026-01-30 02:49:05.136961 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-30 02:49:05.136972 | orchestrator | OS/Arch: linux/amd64 2026-01-30 02:49:05.137014 | orchestrator | Experimental: false 2026-01-30 02:49:05.137025 | orchestrator | containerd: 2026-01-30 02:49:05.137037 | orchestrator | Version: v2.2.1 2026-01-30 02:49:05.137048 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-30 02:49:05.137059 | orchestrator | runc: 2026-01-30 02:49:05.137070 | orchestrator | Version: 1.3.4 2026-01-30 02:49:05.137081 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-30 02:49:05.137092 | orchestrator | docker-init: 2026-01-30 02:49:05.137103 | orchestrator | Version: 0.19.0 2026-01-30 02:49:05.137116 | orchestrator | GitCommit: de40ad0 2026-01-30 02:49:05.140541 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-30 02:49:05.148652 | orchestrator | + set -e 2026-01-30 02:49:05.148705 | orchestrator | + source /opt/manager-vars.sh 2026-01-30 02:49:05.148724 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-30 02:49:05.148743 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-30 02:49:05.148771 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-30 02:49:05.148786 | orchestrator | ++ CEPH_VERSION=reef 2026-01-30 02:49:05.148797 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-30 
02:49:05.148809 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-30 02:49:05.148820 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-30 02:49:05.148831 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-30 02:49:05.148842 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-30 02:49:05.148852 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-30 02:49:05.148863 | orchestrator | ++ export ARA=false
2026-01-30 02:49:05.148875 | orchestrator | ++ ARA=false
2026-01-30 02:49:05.148885 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-30 02:49:05.148933 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-30 02:49:05.148946 | orchestrator | ++ export TEMPEST=false
2026-01-30 02:49:05.148957 | orchestrator | ++ TEMPEST=false
2026-01-30 02:49:05.148967 | orchestrator | ++ export IS_ZUUL=true
2026-01-30 02:49:05.148978 | orchestrator | ++ IS_ZUUL=true
2026-01-30 02:49:05.149032 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 02:49:05.149043 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 02:49:05.149055 | orchestrator | ++ export EXTERNAL_API=false
2026-01-30 02:49:05.149066 | orchestrator | ++ EXTERNAL_API=false
2026-01-30 02:49:05.149077 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-30 02:49:05.149087 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-30 02:49:05.149099 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-30 02:49:05.149110 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-30 02:49:05.149121 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-30 02:49:05.149132 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-30 02:49:05.149142 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 02:49:05.149153 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 02:49:05.149164 | orchestrator | ++ INTERACTIVE=false
2026-01-30 02:49:05.149175 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 02:49:05.149191 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 02:49:05.149207 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-30 02:49:05.149218 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-01-30 02:49:05.156479 | orchestrator | + set -e
2026-01-30 02:49:05.156549 | orchestrator | + VERSION=9.5.0
2026-01-30 02:49:05.156563 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-01-30 02:49:05.163644 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-30 02:49:05.163693 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-30 02:49:05.168394 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-30 02:49:05.173004 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-01-30 02:49:05.181399 | orchestrator | /opt/configuration ~
2026-01-30 02:49:05.181459 | orchestrator | + set -e
2026-01-30 02:49:05.181471 | orchestrator | + pushd /opt/configuration
2026-01-30 02:49:05.181482 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-30 02:49:05.184445 | orchestrator | + source /opt/venv/bin/activate
2026-01-30 02:49:05.185264 | orchestrator | ++ deactivate nondestructive
2026-01-30 02:49:05.185300 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:05.185316 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:05.185358 | orchestrator | ++ hash -r
2026-01-30 02:49:05.185375 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:05.185387 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-30 02:49:05.185398 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-30 02:49:05.185409 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-30 02:49:05.185677 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-30 02:49:05.185707 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-30 02:49:05.185718 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-30 02:49:05.185777 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-30 02:49:05.185789 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 02:49:05.185811 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 02:49:05.185822 | orchestrator | ++ export PATH
2026-01-30 02:49:05.185839 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:05.185850 | orchestrator | ++ '[' -z '' ']'
2026-01-30 02:49:05.185861 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-30 02:49:05.185872 | orchestrator | ++ PS1='(venv) '
2026-01-30 02:49:05.185882 | orchestrator | ++ export PS1
2026-01-30 02:49:05.185894 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-30 02:49:05.185905 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-30 02:49:05.185919 | orchestrator | ++ hash -r
2026-01-30 02:49:05.186168 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-01-30 02:49:06.099888 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-01-30 02:49:06.100537 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-01-30 02:49:06.101924 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-01-30 02:49:06.103259 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-01-30 02:49:06.104419 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-01-30 02:49:06.114075 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-01-30 02:49:06.115392 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-01-30 02:49:06.116394 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-01-30 02:49:06.117680 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-01-30 02:49:06.144973 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-01-30 02:49:06.146288 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-01-30 02:49:06.147968 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-01-30 02:49:06.149358 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-01-30 02:49:06.153190 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-01-30 02:49:06.340908 | orchestrator | ++ which gilt
2026-01-30 02:49:06.343267 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-01-30 02:49:06.343289 | orchestrator | + /opt/venv/bin/gilt overlay
2026-01-30 02:49:06.568649 | orchestrator | osism.cfg-generics:
2026-01-30 02:49:06.709106 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-01-30 02:49:06.709847 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-01-30 02:49:06.710651 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-01-30 02:49:06.710728 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-01-30 02:49:07.387751 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-01-30 02:49:07.398917 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-01-30 02:49:07.689110 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-01-30 02:49:07.740854 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-30 02:49:07.740958 | orchestrator | + deactivate
2026-01-30 02:49:07.740973 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-30 02:49:07.741013 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 02:49:07.741025 | orchestrator | + export PATH
2026-01-30 02:49:07.741037 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-30 02:49:07.741048 | orchestrator | + '[' -n '' ']'
2026-01-30 02:49:07.741062 | orchestrator | + hash -r
2026-01-30 02:49:07.741073 | orchestrator | + '[' -n '' ']'
2026-01-30 02:49:07.741083 | orchestrator | + unset VIRTUAL_ENV
2026-01-30 02:49:07.741094 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-30 02:49:07.741105 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-30 02:49:07.741116 | orchestrator | + unset -f deactivate
2026-01-30 02:49:07.741140 | orchestrator | ~
2026-01-30 02:49:07.741152 | orchestrator | + popd
2026-01-30 02:49:07.742607 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-01-30 02:49:07.742635 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-01-30 02:49:07.743230 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-30 02:49:07.790916 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-30 02:49:07.791100 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-01-30 02:49:07.791141 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-30 02:49:07.845584 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-30 02:49:07.846051 | orchestrator | ++ semver 2024.2 2025.1
2026-01-30 02:49:07.895093 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-30 02:49:07.895206 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-30 02:49:07.982133 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-30 02:49:07.982236 | orchestrator | + source /opt/venv/bin/activate
2026-01-30 02:49:07.982259 | orchestrator | ++ deactivate nondestructive
2026-01-30 02:49:07.982271 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:07.982282 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:07.982293 | orchestrator | ++ hash -r
2026-01-30 02:49:07.982304 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:07.982315 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-30 02:49:07.982326 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-30 02:49:07.982337 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-30 02:49:07.982349 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-30 02:49:07.982360 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-30 02:49:07.982372 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-30 02:49:07.982383 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-30 02:49:07.982394 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 02:49:07.982427 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 02:49:07.982439 | orchestrator | ++ export PATH
2026-01-30 02:49:07.982464 | orchestrator | ++ '[' -n '' ']'
2026-01-30 02:49:07.982476 | orchestrator | ++ '[' -z '' ']'
2026-01-30 02:49:07.982487 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-30 02:49:07.982497 | orchestrator | ++ PS1='(venv) '
2026-01-30 02:49:07.982508 | orchestrator | ++ export PS1
2026-01-30 02:49:07.982519 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-30 02:49:07.982530 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-30 02:49:07.982540 | orchestrator | ++ hash -r
2026-01-30 02:49:07.982552 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-30 02:49:08.985454 | orchestrator |
2026-01-30 02:49:08.985577 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-30 02:49:08.985595 | orchestrator |
2026-01-30 02:49:08.985607 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-30 02:49:09.529460 | orchestrator | ok: [testbed-manager]
2026-01-30 02:49:09.529572 | orchestrator |
2026-01-30 02:49:09.529595 | orchestrator | TASK [Copy fact files] *********************************************************
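The version gating earlier in the trace (`semver 9.5.0 7.0.0` printing `1`, then `[[ 1 -ge 0 ]]` appending `enable_osism_kubernetes: true`) relies on a three-way semver comparison. A minimal sketch of that pattern, assuming plain MAJOR.MINOR.PATCH input — the real `semver` helper in the log also accepts pre-release suffixes such as `10.0.0-0`, which this stand-in does not handle:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the `semver` helper seen in the trace:
# prints 1, 0 or -1 depending on how $1 compares to $2.
# Plain numeric MAJOR.MINOR.PATCH only (no pre-release tags).
semver_cmp() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}

# Gating pattern from the trace: only emit the option from a
# minimum manager version onwards.
if [[ $(semver_cmp "9.5.0" "7.0.0") -ge 0 ]]; then
    echo 'enable_osism_kubernetes: true'
fi
```

The same helper also drives the negative branches in the log (`semver 9.5.0 10.0.0-0` and `semver 2024.2 2025.1` both returning `-1`, so those gates stay closed).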
2026-01-30 02:49:10.472039 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:10.472199 | orchestrator | 2026-01-30 02:49:10.472217 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-01-30 02:49:10.472257 | orchestrator | 2026-01-30 02:49:10.472269 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 02:49:12.657451 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:12.657554 | orchestrator | 2026-01-30 02:49:12.657569 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-01-30 02:49:12.711095 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:12.711187 | orchestrator | 2026-01-30 02:49:12.711200 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-01-30 02:49:13.157257 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:13.157351 | orchestrator | 2026-01-30 02:49:13.157363 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-01-30 02:49:13.201082 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:13.201153 | orchestrator | 2026-01-30 02:49:13.201160 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-30 02:49:13.528427 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:13.528526 | orchestrator | 2026-01-30 02:49:13.528541 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2026-01-30 02:49:13.579503 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:13.579600 | orchestrator | 2026-01-30 02:49:13.579614 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-01-30 02:49:13.902325 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:13.902428 | orchestrator | 2026-01-30 02:49:13.902443 | orchestrator | TASK 
[Add nova_compute_virt_type parameter] ************************************ 2026-01-30 02:49:14.024571 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:14.024647 | orchestrator | 2026-01-30 02:49:14.024657 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-01-30 02:49:14.024665 | orchestrator | 2026-01-30 02:49:14.024672 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 02:49:15.664734 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:15.664840 | orchestrator | 2026-01-30 02:49:15.664857 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-01-30 02:49:15.769206 | orchestrator | included: osism.services.traefik for testbed-manager 2026-01-30 02:49:15.769306 | orchestrator | 2026-01-30 02:49:15.769322 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-01-30 02:49:15.815782 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-01-30 02:49:15.815877 | orchestrator | 2026-01-30 02:49:15.815891 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-01-30 02:49:16.838329 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-01-30 02:49:16.838434 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-01-30 02:49:16.838448 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-01-30 02:49:16.838456 | orchestrator | 2026-01-30 02:49:16.838465 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-01-30 02:49:18.545344 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-01-30 02:49:18.545455 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 
2026-01-30 02:49:18.545469 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-01-30 02:49:18.545480 | orchestrator | 2026-01-30 02:49:18.545490 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-01-30 02:49:19.166602 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-30 02:49:19.166708 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:19.166726 | orchestrator | 2026-01-30 02:49:19.166739 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-01-30 02:49:19.781504 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-30 02:49:19.781609 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:19.781628 | orchestrator | 2026-01-30 02:49:19.781641 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-01-30 02:49:19.825286 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:19.825385 | orchestrator | 2026-01-30 02:49:19.825400 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-01-30 02:49:20.172415 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:20.172516 | orchestrator | 2026-01-30 02:49:20.172533 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-01-30 02:49:20.251306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-01-30 02:49:20.251384 | orchestrator | 2026-01-30 02:49:20.251391 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-01-30 02:49:21.262989 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:21.263145 | orchestrator | 2026-01-30 02:49:21.263162 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-01-30 
02:49:22.012091 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:22.012218 | orchestrator | 2026-01-30 02:49:22.012235 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-01-30 02:49:32.756737 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:32.756851 | orchestrator | 2026-01-30 02:49:32.756890 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-01-30 02:49:32.799355 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:32.799449 | orchestrator | 2026-01-30 02:49:32.799463 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-01-30 02:49:32.799476 | orchestrator | 2026-01-30 02:49:32.799487 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 02:49:35.554633 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:35.554702 | orchestrator | 2026-01-30 02:49:35.554709 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-01-30 02:49:35.666571 | orchestrator | included: osism.services.manager for testbed-manager 2026-01-30 02:49:35.666675 | orchestrator | 2026-01-30 02:49:35.666701 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-01-30 02:49:35.722937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-01-30 02:49:35.723023 | orchestrator | 2026-01-30 02:49:35.723036 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-01-30 02:49:38.174299 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:38.174411 | orchestrator | 2026-01-30 02:49:38.174427 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-01-30 02:49:38.223368 | 
orchestrator | ok: [testbed-manager] 2026-01-30 02:49:38.223474 | orchestrator | 2026-01-30 02:49:38.223490 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-01-30 02:49:38.344686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-01-30 02:49:38.344809 | orchestrator | 2026-01-30 02:49:38.344836 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-01-30 02:49:41.136811 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-01-30 02:49:41.136899 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-01-30 02:49:41.136909 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-01-30 02:49:41.136916 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-01-30 02:49:41.136923 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-01-30 02:49:41.136929 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-01-30 02:49:41.136935 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-01-30 02:49:41.136942 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-01-30 02:49:41.136948 | orchestrator | 2026-01-30 02:49:41.136957 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-01-30 02:49:41.726400 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:41.726527 | orchestrator | 2026-01-30 02:49:41.726545 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-01-30 02:49:42.341370 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:42.341475 | orchestrator | 2026-01-30 02:49:42.341491 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-01-30 
02:49:42.403665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-01-30 02:49:42.403802 | orchestrator | 2026-01-30 02:49:42.403820 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-01-30 02:49:43.571362 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-01-30 02:49:43.571489 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-01-30 02:49:43.571507 | orchestrator | 2026-01-30 02:49:43.571520 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-01-30 02:49:44.169977 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:44.170217 | orchestrator | 2026-01-30 02:49:44.170237 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-01-30 02:49:44.223685 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:44.223777 | orchestrator | 2026-01-30 02:49:44.223790 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-01-30 02:49:44.300725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-01-30 02:49:44.300832 | orchestrator | 2026-01-30 02:49:44.300849 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-01-30 02:49:44.900496 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:44.900600 | orchestrator | 2026-01-30 02:49:44.900612 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-01-30 02:49:44.966377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-01-30 02:49:44.966470 | orchestrator | 2026-01-30 02:49:44.966483 
| orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-01-30 02:49:46.293699 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-30 02:49:46.293825 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-30 02:49:46.293840 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:46.293851 | orchestrator | 2026-01-30 02:49:46.293861 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-01-30 02:49:46.885175 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:46.885298 | orchestrator | 2026-01-30 02:49:46.885316 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-01-30 02:49:46.938623 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:46.938716 | orchestrator | 2026-01-30 02:49:46.938753 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-01-30 02:49:47.049031 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-01-30 02:49:47.049180 | orchestrator | 2026-01-30 02:49:47.049206 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-01-30 02:49:47.572143 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:47.572251 | orchestrator | 2026-01-30 02:49:47.572268 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-01-30 02:49:47.956582 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:47.956686 | orchestrator | 2026-01-30 02:49:47.956711 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-01-30 02:49:49.169858 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-01-30 02:49:49.169988 | orchestrator | changed: [testbed-manager] => (item=openstack) 
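Before copying the celery environment files, the role raises the two inotify sysctls shown above (`fs.inotify.max_user_watches` and `fs.inotify.max_user_instances`). A sketch of the equivalent manual change; the concrete values are assumptions, since the log does not print them, and the file is written to `/tmp` here so the sketch runs anywhere:

```shell
#!/usr/bin/env bash
# Sketch of what the two sysctl tasks effectively do.
# The numeric values below are assumed, not taken from the log.
set -eu

WATCHES=1048576     # assumed value for fs.inotify.max_user_watches
INSTANCES=8192      # assumed value for fs.inotify.max_user_instances

# Persist the settings as a sysctl.d-style drop-in.
cat > /tmp/99-osism-inotify.conf <<EOF
fs.inotify.max_user_watches = ${WATCHES}
fs.inotify.max_user_instances = ${INSTANCES}
EOF

# On a real host the file would live in /etc/sysctl.d/ and be
# applied with `sysctl --system`; here we just show what was written.
grep -c '^fs.inotify' /tmp/99-osism-inotify.conf   # prints 2
```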
2026-01-30 02:49:49.170005 | orchestrator | 2026-01-30 02:49:49.170155 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-01-30 02:49:49.770977 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:49.771111 | orchestrator | 2026-01-30 02:49:49.771128 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-01-30 02:49:50.124792 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:50.124894 | orchestrator | 2026-01-30 02:49:50.124908 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-01-30 02:49:50.468974 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:50.469148 | orchestrator | 2026-01-30 02:49:50.469167 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-01-30 02:49:50.521462 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:50.521588 | orchestrator | 2026-01-30 02:49:50.521603 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-01-30 02:49:50.593223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-01-30 02:49:50.593301 | orchestrator | 2026-01-30 02:49:50.593311 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-01-30 02:49:50.638763 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:50.638832 | orchestrator | 2026-01-30 02:49:50.638839 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-01-30 02:49:52.584555 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-01-30 02:49:52.584681 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-01-30 02:49:52.584702 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 
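The wrapper scripts copied above (`osism`, `osism-update-docker`, `osism-update-manager`) are thin shells around containerized tooling. A purely illustrative sketch of that shape — the actual forwarding command, compose service, and CLI names are not shown in the log, so the inner command is replaced by an `echo` to keep the sketch runnable:

```shell
#!/usr/bin/env bash
# Illustrative shape of a CLI wrapper script: forward all
# arguments into a container. Names here are assumptions.
set -eu

run_wrapped() {
    # A real wrapper would exec something along the lines of a
    # `docker compose exec <service> osism "$@"` call; this sketch
    # only echoes the forwarded command line.
    echo "osism $*"
}

run_wrapped apply facts
```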
2026-01-30 02:49:52.584720 | orchestrator | 2026-01-30 02:49:52.584737 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-01-30 02:49:53.280257 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:53.280347 | orchestrator | 2026-01-30 02:49:53.280360 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-01-30 02:49:53.961026 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:53.961158 | orchestrator | 2026-01-30 02:49:53.961176 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-01-30 02:49:54.662510 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:54.662638 | orchestrator | 2026-01-30 02:49:54.662656 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-01-30 02:49:54.736897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-01-30 02:49:54.736999 | orchestrator | 2026-01-30 02:49:54.737015 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-01-30 02:49:54.779841 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:54.779918 | orchestrator | 2026-01-30 02:49:54.779927 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-01-30 02:49:55.465875 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-01-30 02:49:55.465976 | orchestrator | 2026-01-30 02:49:55.465992 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-01-30 02:49:55.552894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-01-30 02:49:55.552983 | orchestrator | 2026-01-30 02:49:55.552997 | orchestrator | 
TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-01-30 02:49:56.194348 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:56.194476 | orchestrator | 2026-01-30 02:49:56.194506 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-01-30 02:49:56.755325 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:56.755428 | orchestrator | 2026-01-30 02:49:56.755444 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-01-30 02:49:56.810356 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:49:56.810453 | orchestrator | 2026-01-30 02:49:56.810467 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-01-30 02:49:56.861168 | orchestrator | ok: [testbed-manager] 2026-01-30 02:49:56.861263 | orchestrator | 2026-01-30 02:49:56.861278 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-01-30 02:49:57.629392 | orchestrator | changed: [testbed-manager] 2026-01-30 02:49:57.629493 | orchestrator | 2026-01-30 02:49:57.629511 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-01-30 02:50:57.976117 | orchestrator | changed: [testbed-manager] 2026-01-30 02:50:57.976265 | orchestrator | 2026-01-30 02:50:57.976283 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-01-30 02:50:58.947717 | orchestrator | ok: [testbed-manager] 2026-01-30 02:50:58.947818 | orchestrator | 2026-01-30 02:50:58.947834 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-01-30 02:50:59.007382 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:50:59.007479 | orchestrator | 2026-01-30 02:50:59.007495 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2026-01-30 02:51:01.711443 | orchestrator | changed: [testbed-manager] 2026-01-30 02:51:01.711570 | orchestrator | 2026-01-30 02:51:01.711595 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-01-30 02:51:01.821277 | orchestrator | ok: [testbed-manager] 2026-01-30 02:51:01.821366 | orchestrator | 2026-01-30 02:51:01.821376 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-30 02:51:01.821384 | orchestrator | 2026-01-30 02:51:01.821390 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-01-30 02:51:01.865700 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:51:01.865822 | orchestrator | 2026-01-30 02:51:01.865847 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-01-30 02:52:01.915533 | orchestrator | Pausing for 60 seconds 2026-01-30 02:52:01.915655 | orchestrator | changed: [testbed-manager] 2026-01-30 02:52:01.915671 | orchestrator | 2026-01-30 02:52:01.915684 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-01-30 02:52:04.442556 | orchestrator | changed: [testbed-manager] 2026-01-30 02:52:04.442663 | orchestrator | 2026-01-30 02:52:04.442679 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-01-30 02:52:45.920888 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-01-30 02:52:45.921032 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
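The `Wait for an healthy manager service` handler above is an Ansible retry loop (50 retries) around a health probe; two attempts failed before it passed. The same pattern in plain shell, with a stand-in probe — the real task inspects the container health status rather than calling a shell function:

```shell
#!/usr/bin/env bash
# Sketch of the retry-until-healthy pattern: poll a probe until it
# succeeds or the retry budget is exhausted.
set -u

wait_for_healthy() {
    local retries=$1 delay=$2 attempt=0
    while (( attempt < retries )); do
        if probe; then
            echo "healthy after $((attempt + 1)) attempt(s)"
            return 0
        fi
        attempt=$((attempt + 1))
        sleep "$delay"
    done
    echo "service never became healthy" >&2
    return 1
}

# Stand-in probe: fails twice, then succeeds — mirroring the two
# "FAILED - RETRYING" lines in the log before the handler passed.
PROBE_CALLS=0
probe() {
    PROBE_CALLS=$((PROBE_CALLS + 1))
    (( PROBE_CALLS >= 3 ))
}

wait_for_healthy 50 0   # prints "healthy after 3 attempt(s)"
```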
2026-01-30 02:52:45.921050 | orchestrator | changed: [testbed-manager] 2026-01-30 02:52:45.921064 | orchestrator | 2026-01-30 02:52:45.921077 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-30 02:52:55.566125 | orchestrator | changed: [testbed-manager] 2026-01-30 02:52:55.566271 | orchestrator | 2026-01-30 02:52:55.566290 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-30 02:52:55.664406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-30 02:52:55.664525 | orchestrator | 2026-01-30 02:52:55.664551 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-30 02:52:55.664573 | orchestrator | 2026-01-30 02:52:55.664592 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-30 02:52:55.720795 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:52:55.720914 | orchestrator | 2026-01-30 02:52:55.720941 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-30 02:52:55.791129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-30 02:52:55.791240 | orchestrator | 2026-01-30 02:52:55.791259 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-30 02:52:56.570907 | orchestrator | changed: [testbed-manager] 2026-01-30 02:52:56.571010 | orchestrator | 2026-01-30 02:52:56.571026 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-30 02:52:59.863804 | orchestrator | ok: [testbed-manager] 2026-01-30 02:52:59.863905 | orchestrator | 2026-01-30 02:52:59.863921 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-01-30 02:52:59.943223 | orchestrator | ok: [testbed-manager] => { 2026-01-30 02:52:59.943318 | orchestrator | "version_check_result.stdout_lines": [ 2026-01-30 02:52:59.943333 | orchestrator | "=== OSISM Container Version Check ===", 2026-01-30 02:52:59.943345 | orchestrator | "Checking running containers against expected versions...", 2026-01-30 02:52:59.943358 | orchestrator | "", 2026-01-30 02:52:59.943447 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-01-30 02:52:59.943461 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-01-30 02:52:59.943473 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943485 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-01-30 02:52:59.943496 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.943507 | orchestrator | "", 2026-01-30 02:52:59.943518 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-01-30 02:52:59.943557 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-01-30 02:52:59.943568 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943579 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-01-30 02:52:59.943590 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.943601 | orchestrator | "", 2026-01-30 02:52:59.943611 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-01-30 02:52:59.943622 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-01-30 02:52:59.943633 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943644 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-01-30 02:52:59.943654 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.943665 | orchestrator | 
"", 2026-01-30 02:52:59.943675 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-01-30 02:52:59.943686 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-01-30 02:52:59.943697 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943708 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-01-30 02:52:59.943718 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.943729 | orchestrator | "", 2026-01-30 02:52:59.943742 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-01-30 02:52:59.943754 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-01-30 02:52:59.943767 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943779 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-01-30 02:52:59.943791 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.943803 | orchestrator | "", 2026-01-30 02:52:59.943815 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-01-30 02:52:59.943828 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.943840 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943852 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.943864 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.943877 | orchestrator | "", 2026-01-30 02:52:59.943889 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-01-30 02:52:59.943901 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-30 02:52:59.943914 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943926 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-01-30 02:52:59.943938 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.943949 | orchestrator | "", 2026-01-30 02:52:59.943961 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-01-30 02:52:59.943973 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-30 02:52:59.943985 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.943997 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-30 02:52:59.944009 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944020 | orchestrator | "", 2026-01-30 02:52:59.944032 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-30 02:52:59.944044 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-30 02:52:59.944056 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.944068 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-30 02:52:59.944080 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944092 | orchestrator | "", 2026-01-30 02:52:59.944104 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-30 02:52:59.944115 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-30 02:52:59.944125 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.944136 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-30 02:52:59.944147 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944157 | orchestrator | "", 2026-01-30 02:52:59.944168 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-30 02:52:59.944184 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944195 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.944205 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944216 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944227 | orchestrator | "", 2026-01-30 02:52:59.944238 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-30 02:52:59.944249 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944259 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.944271 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944281 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944292 | orchestrator | "", 2026-01-30 02:52:59.944303 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-30 02:52:59.944314 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944325 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.944335 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944346 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944356 | orchestrator | "", 2026-01-30 02:52:59.944385 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-30 02:52:59.944396 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944407 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.944418 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944448 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944459 | orchestrator | "", 2026-01-30 02:52:59.944470 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-30 02:52:59.944481 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944501 | orchestrator | " Enabled: true", 2026-01-30 02:52:59.944512 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-30 02:52:59.944523 | orchestrator | " Status: ✅ MATCH", 2026-01-30 02:52:59.944533 | orchestrator | "", 2026-01-30 02:52:59.944544 | orchestrator | "=== Summary ===", 2026-01-30 02:52:59.944555 | orchestrator | "Errors (version mismatches): 0", 2026-01-30 02:52:59.944566 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-01-30 02:52:59.944577 | orchestrator | "", 2026-01-30 02:52:59.944588 | orchestrator | "✅ All running containers match expected versions!" 2026-01-30 02:52:59.944599 | orchestrator | ] 2026-01-30 02:52:59.944611 | orchestrator | } 2026-01-30 02:52:59.944622 | orchestrator | 2026-01-30 02:52:59.944633 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-30 02:53:00.005553 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:53:00.005654 | orchestrator | 2026-01-30 02:53:00.005671 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 02:53:00.005686 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-30 02:53:00.005698 | orchestrator | 2026-01-30 02:53:00.134586 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-30 02:53:00.134677 | orchestrator | + deactivate 2026-01-30 02:53:00.134692 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-30 02:53:00.134706 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-30 02:53:00.134718 | orchestrator | + export PATH 2026-01-30 02:53:00.134729 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-30 02:53:00.134741 | orchestrator | + '[' -n '' ']' 2026-01-30 02:53:00.134752 | orchestrator | + hash -r 2026-01-30 02:53:00.134764 | orchestrator | + '[' -n '' ']' 2026-01-30 02:53:00.134775 | orchestrator | + unset VIRTUAL_ENV 2026-01-30 02:53:00.134786 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-30 02:53:00.134798 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-30 02:53:00.134809 | orchestrator | + unset -f deactivate 2026-01-30 02:53:00.134821 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-30 02:53:00.141557 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-30 02:53:00.141630 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-30 02:53:00.141642 | orchestrator | + local max_attempts=60 2026-01-30 02:53:00.141653 | orchestrator | + local name=ceph-ansible 2026-01-30 02:53:00.141665 | orchestrator | + local attempt_num=1 2026-01-30 02:53:00.142579 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-30 02:53:00.183262 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-30 02:53:00.183349 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-30 02:53:00.183388 | orchestrator | + local max_attempts=60 2026-01-30 02:53:00.183402 | orchestrator | + local name=kolla-ansible 2026-01-30 02:53:00.183413 | orchestrator | + local attempt_num=1 2026-01-30 02:53:00.183702 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-30 02:53:00.211906 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-30 02:53:00.212006 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-30 02:53:00.212022 | orchestrator | + local max_attempts=60 2026-01-30 02:53:00.212037 | orchestrator | + local name=osism-ansible 2026-01-30 02:53:00.212055 | orchestrator | + local attempt_num=1 2026-01-30 02:53:00.212258 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-30 02:53:00.248596 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-30 02:53:00.248678 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-30 02:53:00.248688 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-30 02:53:00.919267 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-30 02:53:01.090966 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-30 02:53:01.091067 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-01-30 02:53:01.091083 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-01-30 02:53:01.091095 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-01-30 02:53:01.091108 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-01-30 02:53:01.091143 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-01-30 02:53:01.091155 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-01-30 02:53:01.091166 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-01-30 02:53:01.091177 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-01-30 02:53:01.091188 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-01-30 02:53:01.091199 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 
openstack 2 minutes ago Up About a minute (healthy) 2026-01-30 02:53:01.091209 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-01-30 02:53:01.091243 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-01-30 02:53:01.091255 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-01-30 02:53:01.091267 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-01-30 02:53:01.091278 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-01-30 02:53:01.096851 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-30 02:53:01.143469 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-30 02:53:01.143549 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-30 02:53:01.146547 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-30 02:53:13.277862 | orchestrator | 2026-01-30 02:53:13 | INFO  | Task 812a6e2d-c7e3-46bf-97ee-04bd2495bfeb (resolvconf) was prepared for execution. 2026-01-30 02:53:13.277954 | orchestrator | 2026-01-30 02:53:13 | INFO  | It takes a moment until task 812a6e2d-c7e3-46bf-97ee-04bd2495bfeb (resolvconf) has been started and output is visible here. 
2026-01-30 02:53:26.284966 | orchestrator | 2026-01-30 02:53:26.285103 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-01-30 02:53:26.285120 | orchestrator | 2026-01-30 02:53:26.285132 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 02:53:26.285144 | orchestrator | Friday 30 January 2026 02:53:17 +0000 (0:00:00.101) 0:00:00.101 ******** 2026-01-30 02:53:26.285156 | orchestrator | ok: [testbed-manager] 2026-01-30 02:53:26.285167 | orchestrator | 2026-01-30 02:53:26.285178 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-30 02:53:26.285190 | orchestrator | Friday 30 January 2026 02:53:20 +0000 (0:00:03.641) 0:00:03.742 ******** 2026-01-30 02:53:26.285201 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:53:26.285214 | orchestrator | 2026-01-30 02:53:26.285225 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-30 02:53:26.285236 | orchestrator | Friday 30 January 2026 02:53:20 +0000 (0:00:00.060) 0:00:03.803 ******** 2026-01-30 02:53:26.285247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-01-30 02:53:26.285260 | orchestrator | 2026-01-30 02:53:26.285271 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-30 02:53:26.285282 | orchestrator | Friday 30 January 2026 02:53:20 +0000 (0:00:00.078) 0:00:03.882 ******** 2026-01-30 02:53:26.285311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-01-30 02:53:26.285323 | orchestrator | 2026-01-30 02:53:26.285334 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-01-30 02:53:26.285345 | orchestrator | Friday 30 January 2026 02:53:20 +0000 (0:00:00.071) 0:00:03.953 ******** 2026-01-30 02:53:26.285356 | orchestrator | ok: [testbed-manager] 2026-01-30 02:53:26.285367 | orchestrator | 2026-01-30 02:53:26.285377 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-30 02:53:26.285389 | orchestrator | Friday 30 January 2026 02:53:21 +0000 (0:00:00.982) 0:00:04.935 ******** 2026-01-30 02:53:26.285451 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:53:26.285465 | orchestrator | 2026-01-30 02:53:26.285476 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-30 02:53:26.285508 | orchestrator | Friday 30 January 2026 02:53:21 +0000 (0:00:00.066) 0:00:05.002 ******** 2026-01-30 02:53:26.285522 | orchestrator | ok: [testbed-manager] 2026-01-30 02:53:26.285534 | orchestrator | 2026-01-30 02:53:26.285547 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-30 02:53:26.285560 | orchestrator | Friday 30 January 2026 02:53:22 +0000 (0:00:00.502) 0:00:05.504 ******** 2026-01-30 02:53:26.285572 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:53:26.285584 | orchestrator | 2026-01-30 02:53:26.285597 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-30 02:53:26.285612 | orchestrator | Friday 30 January 2026 02:53:22 +0000 (0:00:00.073) 0:00:05.578 ******** 2026-01-30 02:53:26.285624 | orchestrator | changed: [testbed-manager] 2026-01-30 02:53:26.285637 | orchestrator | 2026-01-30 02:53:26.285650 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-30 02:53:26.285662 | orchestrator | Friday 30 January 2026 02:53:22 +0000 (0:00:00.515) 0:00:06.093 ******** 2026-01-30 02:53:26.285675 | orchestrator | changed: 
[testbed-manager] 2026-01-30 02:53:26.285687 | orchestrator | 2026-01-30 02:53:26.285701 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-30 02:53:26.285713 | orchestrator | Friday 30 January 2026 02:53:24 +0000 (0:00:01.018) 0:00:07.112 ******** 2026-01-30 02:53:26.285726 | orchestrator | ok: [testbed-manager] 2026-01-30 02:53:26.285738 | orchestrator | 2026-01-30 02:53:26.285751 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-30 02:53:26.285763 | orchestrator | Friday 30 January 2026 02:53:24 +0000 (0:00:00.902) 0:00:08.015 ******** 2026-01-30 02:53:26.285776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-01-30 02:53:26.285789 | orchestrator | 2026-01-30 02:53:26.285802 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-30 02:53:26.285815 | orchestrator | Friday 30 January 2026 02:53:24 +0000 (0:00:00.075) 0:00:08.090 ******** 2026-01-30 02:53:26.285827 | orchestrator | changed: [testbed-manager] 2026-01-30 02:53:26.285840 | orchestrator | 2026-01-30 02:53:26.285853 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 02:53:26.285866 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 02:53:26.285877 | orchestrator | 2026-01-30 02:53:26.285900 | orchestrator | 2026-01-30 02:53:26.285922 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 02:53:26.285934 | orchestrator | Friday 30 January 2026 02:53:26 +0000 (0:00:01.083) 0:00:09.174 ******** 2026-01-30 02:53:26.285945 | orchestrator | =============================================================================== 2026-01-30 02:53:26.285956 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.64s 2026-01-30 02:53:26.285967 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s 2026-01-30 02:53:26.285978 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.02s 2026-01-30 02:53:26.285989 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.98s 2026-01-30 02:53:26.286000 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s 2026-01-30 02:53:26.286011 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2026-01-30 02:53:26.286120 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-01-30 02:53:26.286133 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-01-30 02:53:26.286144 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-01-30 02:53:26.286155 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-01-30 02:53:26.286176 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-01-30 02:53:26.286187 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-01-30 02:53:26.286198 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2026-01-30 02:53:26.535350 | orchestrator | + osism apply sshconfig 2026-01-30 02:53:38.468378 | orchestrator | 2026-01-30 02:53:38 | INFO  | Task 09622e37-7b9d-41cc-ab31-27d6dfd12b81 (sshconfig) was prepared for execution. 
2026-01-30 02:53:38.468554 | orchestrator | 2026-01-30 02:53:38 | INFO  | It takes a moment until task 09622e37-7b9d-41cc-ab31-27d6dfd12b81 (sshconfig) has been started and output is visible here. 2026-01-30 02:53:49.130130 | orchestrator | 2026-01-30 02:53:49.130216 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-01-30 02:53:49.130224 | orchestrator | 2026-01-30 02:53:49.130245 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-01-30 02:53:49.130251 | orchestrator | Friday 30 January 2026 02:53:42 +0000 (0:00:00.138) 0:00:00.138 ******** 2026-01-30 02:53:49.130257 | orchestrator | ok: [testbed-manager] 2026-01-30 02:53:49.130306 | orchestrator | 2026-01-30 02:53:49.130312 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-01-30 02:53:49.130317 | orchestrator | Friday 30 January 2026 02:53:42 +0000 (0:00:00.502) 0:00:00.640 ******** 2026-01-30 02:53:49.130322 | orchestrator | changed: [testbed-manager] 2026-01-30 02:53:49.130329 | orchestrator | 2026-01-30 02:53:49.130334 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-01-30 02:53:49.130339 | orchestrator | Friday 30 January 2026 02:53:43 +0000 (0:00:00.407) 0:00:01.048 ******** 2026-01-30 02:53:49.130344 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-01-30 02:53:49.130350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-01-30 02:53:49.130355 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-01-30 02:53:49.130360 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-01-30 02:53:49.130365 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-01-30 02:53:49.130370 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-01-30 02:53:49.130374 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-01-30 02:53:49.130379 | orchestrator | 2026-01-30 02:53:49.130384 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-01-30 02:53:49.130389 | orchestrator | Friday 30 January 2026 02:53:48 +0000 (0:00:05.140) 0:00:06.188 ******** 2026-01-30 02:53:49.130394 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:53:49.130399 | orchestrator | 2026-01-30 02:53:49.130403 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-01-30 02:53:49.130408 | orchestrator | Friday 30 January 2026 02:53:48 +0000 (0:00:00.068) 0:00:06.257 ******** 2026-01-30 02:53:49.130413 | orchestrator | changed: [testbed-manager] 2026-01-30 02:53:49.130418 | orchestrator | 2026-01-30 02:53:49.130423 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 02:53:49.130428 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 02:53:49.130455 | orchestrator | 2026-01-30 02:53:49.130460 | orchestrator | 2026-01-30 02:53:49.130465 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 02:53:49.130470 | orchestrator | Friday 30 January 2026 02:53:48 +0000 (0:00:00.557) 0:00:06.814 ******** 2026-01-30 02:53:49.130475 | orchestrator | =============================================================================== 2026-01-30 02:53:49.130479 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.14s 2026-01-30 02:53:49.130484 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-01-30 02:53:49.130489 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.50s 2026-01-30 02:53:49.130515 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.41s 2026-01-30 02:53:49.130520 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-01-30 02:53:49.375863 | orchestrator | + osism apply known-hosts 2026-01-30 02:54:01.399334 | orchestrator | 2026-01-30 02:54:01 | INFO  | Task 342cb3ac-1e89-4186-9e1e-29479a4ba097 (known-hosts) was prepared for execution. 2026-01-30 02:54:01.399508 | orchestrator | 2026-01-30 02:54:01 | INFO  | It takes a moment until task 342cb3ac-1e89-4186-9e1e-29479a4ba097 (known-hosts) has been started and output is visible here. 2026-01-30 02:54:17.543349 | orchestrator | 2026-01-30 02:54:17.543467 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-01-30 02:54:17.543558 | orchestrator | 2026-01-30 02:54:17.543571 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-01-30 02:54:17.543585 | orchestrator | Friday 30 January 2026 02:54:05 +0000 (0:00:00.154) 0:00:00.154 ******** 2026-01-30 02:54:17.543597 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-30 02:54:17.543610 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-30 02:54:17.543621 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-30 02:54:17.543632 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-30 02:54:17.543643 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-30 02:54:17.543654 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-30 02:54:17.543665 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-30 02:54:17.543676 | orchestrator | 2026-01-30 02:54:17.543687 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-01-30 02:54:17.543700 | orchestrator | Friday 30 January 2026 02:54:11 +0000 (0:00:05.827) 0:00:05.982 ******** 2026-01-30 
02:54:17.543712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-30 02:54:17.543725 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-30 02:54:17.543737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-30 02:54:17.543748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-30 02:54:17.543759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-30 02:54:17.543781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-30 02:54:17.543793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-30 02:54:17.543804 | orchestrator | 2026-01-30 02:54:17.543815 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:17.543826 | orchestrator | Friday 30 January 2026 02:54:11 +0000 (0:00:00.155) 0:00:06.137 ******** 2026-01-30 02:54:17.543837 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIGClfjfHf7HBOaNskxr608fHv4atPua4VVGn7zCzh6RP) 2026-01-30 02:54:17.543858 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDItXT22qP6QhT2Z0JJw3a/Paik1fMJd8sM0SN4TDpw5r8Od+3u+7spcqgT9AlRDks1PhgR6WCWbDmfjU+yRXpFcfLOysx4LXFdDxXhqHuC8X+Tl5YplsnKzBh5z8iwQr1B+UsKMBmvUIf37AQQ9AmuZ4ht5zIpmU3bEphp87Z+CoZe1QbGfRc5kUkWVE4e7CrHd7Jp4U10WuXUGwmCuNwKYgPwE8XEuISY0iR1swV2/S43MaSSaPweL6oF+ip4GtAeHwL2FPC977F1ywzS7I8iFG0KyyLZ2x+7MK+A7QA1RtGnFmxLWTdaPBFk3x+TXGNHJZDzTCaUyPDvLQ0u7bkc1KTAPLwFg540tCDAwL09OGtooVGujj6r2SGxSAoKCArFqsoqvtzEHKO0Rq0virBWsf8RHOps8durAvWlXH3fsuHwLhK2wCeMFYq7L/aA/st1F0Ns5QLMm8DqnW1449ZL3Lc0HEq2Do8LRuquJOCVYmNx2h5UaiIKiNUs2306k+0=) 2026-01-30 02:54:17.543898 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzlqrFOuy5K4S3RKAmyI5xiTKA0hjpAOtK5sXbhM6rQ8o/3fGFgucY0kYaBv1e8naFuukefSaQXuFUX706ayao=) 2026-01-30 02:54:17.543913 | orchestrator | 2026-01-30 02:54:17.543925 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:17.543938 | orchestrator | Friday 30 January 2026 02:54:12 +0000 (0:00:01.140) 0:00:07.278 ******** 2026-01-30 02:54:17.543971 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhnUvSP0rsJr9plUYP61CJ5KZwcfsFqM9YBC7DZLHf6tt1VBGXvn32UtZaH4bsxdCLj38r7GB5wtfAKYSYU9eZdMb50NIEJx9ldQFVkKf+WJ8Jv6K+xorIi20IVLk1lZ1OjL1letpQIAJ4VqBS+5dtRDi5htW6+4GHwKDIoLaI8xIQQPwtuBJRojE6xKkES9SCVAWoaDiSohW8A6MfgtY1jC56iLayOFYxSfuf3c5kOeb9Sgy9IWUeH50lvOMNJQvjiS6SzCwklQUobYszteaq675Pj9kpPMo8WvagWTIgr4oglrMdycMCYjqmNvNkt/I9LGZfL/YtslWYTnI7L0tpyU3mjohIjQbchWjxgTb/QpbZxNj0CXmjVK9bVSCO5ddiFeJztlrhS2BCg/LUEE5I+fy7jSlbRCMu0PzPtY/lBQegPQCEA9uxqr/au8ivt4KKLzHrGxt591BRpqjBb+9CgLBz2xSa0h/XLZZbm3GdeED+CsRQNJQMtPNtRqliKAk=) 2026-01-30 02:54:17.543986 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHWpN/QHTGd4MC7nhCuPg3xanvqAdMJjW2nnqdcFdvsdsoJy1PkNTs4uBJEEmBmRYtya4gjLSdouf2xBBuwS0ek=) 2026-01-30 02:54:17.543999 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFvUviy4k98NmGpXS+dnPIArNjUw77QTkzkMAnd/8W2x) 2026-01-30 02:54:17.544012 | orchestrator | 2026-01-30 02:54:17.544025 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:17.544038 | orchestrator | Friday 30 January 2026 02:54:13 +0000 (0:00:00.989) 0:00:08.267 ******** 2026-01-30 02:54:17.544051 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGdwraQExxsJmNg541xjD6+/kgefjJyRamTKpP7bCW85FYUdxii1SQQZ83VS4gKHNXgR3kl2umnbFaagvRfIdFQ=) 2026-01-30 02:54:17.544065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzAYDc6VTQ0EWjyCQiEJGSEZbaqO8lylz1rAIEUj0NOgPLd057G6NaUdL3Bng19kzJc6uE/4Wa+Az/4g4W1K/MkKcw5gKMQ9AtzZfNZ1U4kom/DF8WPtRokt7FZ9WVZoNi0B279Oh09zMwlEqLGLc3l6EjeNg8GwHtXESx6hsoIU0ub67T49kIxHIOvHKVAGN+Rvau2+KIs4FnNAYLQFm0TxnxLOG/0xMNeaqsQd7MW+VQimN8Iu/gnbeS8O7jQwLnjj3aC8rk6lnoBnV2ZTtVNAVeG0kYiU0xC1ZNR130gw2+h2Qkxp9Nx1As7nWV1cZ1heGVV2H2mUJQ2025m88GIRbPJC8nwTBNQWijxkxzFoNL2wcyAmPD3vM79YGRT9KEAVQttTf7HLT+4UmadCzHNarqPFpbTnYKWw6mHZL1TkUQ+CkFuL64UJloPzhr1LG6byQwd3CqZZYxGHfNIRyxWw96iglvf1HnbYurNPckbNlPMdkn0Kmt53kb0AtYrQ0=) 2026-01-30 02:54:17.544078 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxu7ZT64FFifLk2oGIjDIhsvg3bwG0GFEWjOBY1GZiA) 2026-01-30 02:54:17.544090 | orchestrator | 2026-01-30 02:54:17.544103 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:17.544116 | orchestrator | Friday 30 January 2026 02:54:14 +0000 (0:00:00.993) 
0:00:09.261 ******** 2026-01-30 02:54:17.544128 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHtjw/WivjFjphM+X6QSSRYnmTldtiF1rYEAgyjBBxaTV0dE87x+/6Zf9YWwbIaKVAlOQWqGTeBGaWcElbVbN0=) 2026-01-30 02:54:17.544142 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPjN+aqH/gwsd8JhOXMdFTv3qM7tTw6Vcfw5Y6GmaEYePr0pknaeWAxBW0GGCd7DuH6nunWD2IKV00d5hDM5A7YuxAcWr/BW7uIdoEizbmMu4/db0KjVbl62bS6fpNlWGZD8XjtSyk5BqO44PZmCFLQQ61l5paBnbAt9gNHi4vWYu7vhu7JPFhg0wvE5okdVJ4hiGoD8PSmxFxdk8hYFbavUAd9VdYPv0EkpSZ2irYuwzNkaMykhSV7jUP19V9oIYsHMTGN6JkVKwV3lTsfAcIEY5/T4DvD5eFW83VQZbDSixEtbTxByUmhd9jMNfjaKLCnUrRhGZgHNkX/zQU2Ae6zBeB429AB0oSs7POiB1+n6LjS8rCH/6fh7NDyMUOUtIampNZXYZqXpiCUoyG5qNSBD+8LKYn0o8QumQ28HqOqlAZUR4B2cvJblywkPGL5ZVri1w6nNKNsAJtTWf40UfapaYg/ZX/ueAyv8y0y/iPmSf1qSenT6QwuEQEE2X8NWM=) 2026-01-30 02:54:17.544162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEfkqJd5AAgrzsJucmMyq98WicRJYn7dNb+LfGogKo9E) 2026-01-30 02:54:17.544175 | orchestrator | 2026-01-30 02:54:17.544188 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:17.544202 | orchestrator | Friday 30 January 2026 02:54:15 +0000 (0:00:01.003) 0:00:10.264 ******** 2026-01-30 02:54:17.544290 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINMRJitmX6rtAa4nq+npECGm0mIaF6rrufGOyfJo+EIN) 2026-01-30 02:54:17.544302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCK8jV/Rvhjorl0L8MEquD9AS3NvGD/BKcmIvTp42cewbMZxf7a8aXRuXRQhDR61gH6F2OpuRpvdXszuW/zMeTffmEQ8bffJwNnoY42/5IR4LSwYp9MTTe9BQ401FJc5L6hKwvOzLoOWV3aFvu0xqEtMwnJkQuQ9sGiLITT16IwX08TXjdL+I3W8ysZouu16Jw1/qmN+C4EWkwmRObqob+mv8h0BIrcmljUun74fuV87tKVMkGCA0gN4pG6njNYsI825dVdTuYd8x26P5oCBXXzxU6HY8aQM7J1Cu90l5w+hIUJ2ipN3vMnlSm5gvgg2LPzg7b64KPkvF1fFJhfdjewE0vsRKrcEsujN1QCn3G2TT38iOCi+VpOvBDhy5I6l+PtXTh2j7UcvTVwV9chbOlZ9BXrmV3xS5UuMa0w6FxDpEgBxXBzAiJlD45ZIF+kIfpfPDov8YpWQn0KTsogLwT4Lj3OHmPrrw7z1lyQUO0Gd5IYOZRFoT05hWwH3nShqVU=) 2026-01-30 02:54:17.544314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBecPGDj9Tu7ZkqsN1M3q74GE/8NbCVAILeifNS5f5dLZIc1tl/vv7aRmXI7sRAIaJ/uPmvBoKW34DbJzSBDemg=) 2026-01-30 02:54:17.544325 | orchestrator | 2026-01-30 02:54:17.544336 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:17.544347 | orchestrator | Friday 30 January 2026 02:54:16 +0000 (0:00:01.014) 0:00:11.279 ******** 2026-01-30 02:54:17.544367 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDxSn7gFPMName8A0VYn9UWgyilCjPKl5UAI7vOAk6W2lkwPy8b6Yby6vCACFc4ajcnNg+ryhnM5oFNe3KBMN1vaeAuTDxw8pUVcHsFlGkkq0k3m3yBkbumrVJgZeKi5XabbVf+Us6c1RBTcBLVtAf7KZhXjTUnMaAMsR87ltgX4KodDZoi5uIWvv9StUYjeDF3yVUYt2dXn4F/6AjxmVYTynDJlfsEKrLkAIqmygGABYMOObSM6ETRvfWb5fyUKG4dBy7R/Wk82QN6gxFXPB9GW2uB5fqGgeUna3TWebVQItxEQfnvsI54MZbeRsUF5Oea0T7Ilbha/ZAmtuIeapZGIDc9Rw9eyXb0zCaJgHpaecR7ID/voqYpvyAIAr/a4ZYuKvV8a3ghihG8TpSQyfANkRMLXmaLN083eEcusVtL7ssPbXD42zWmhxDLi1TwiZ6XReRkFkDD6a2jdL2DuNtAcScoOt8SPvkhWREmD5uqsfHJH0Li3EkRRb9XFckvVS8=) 2026-01-30 02:54:28.451256 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII1Q2a4dvSBiWhoH6N0ngQv5aY2WQRBnco+e5/pgK0/N) 2026-01-30 02:54:28.451365 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBChT9YV8Y+soOIXszSwsOhuIADmKYbfLa3Q1dUx9i4acVCoIbyQdyqbtVbKBQWjGwt3BNvGs3gtfsQfIj7wr4pw=) 2026-01-30 02:54:28.451381 | orchestrator | 2026-01-30 02:54:28.451393 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:28.451405 | orchestrator | Friday 30 January 2026 02:54:17 +0000 (0:00:01.020) 0:00:12.300 ******** 2026-01-30 02:54:28.451415 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGUnUNEPwqqT+oO0oJsmmcPXOB6u3IkSnbzRQprgrZMn) 2026-01-30 02:54:28.451427 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm6A+vX++FgDhoo4NR0PoPoG1WurGAdshXjpBDynBJpQtesjMmrW7ERdrjjXv/c91suCX4HEOoFC6WYwYQsOeWebM55IyHK3wYd+t6S/ukHy30D8SJXxnZx0nfflR/Z7/D/ZsHDYzc37d4UyIaA1DkomDVKLfA/JczjtPjEkQzDLstyIDhqyxvtfnqAu8Cf22Z0qpn/0e3xbTvFdU8Hj/mcLmQDJ35WL9L2I4Nvb55xVHay5uU7seWB9mNHzc4i/K1ZKSl6k3yrxikSF/TaINLse+9h81F9kJHD9W5hnzZB/MVRwVScXUI19/OSGfqVZAXw8Eti58GEmTVgIqn+/Nvm6APlyYa8fEu5EQBwIpbBLGYTkvMvWxkzGqtQIhgkCrTXsFkYiKzGGrSmD2WMfTiW8NxcMDM/NEhSp2S25S7IxHuAED4RU/qN8rp6ig0tgx2f/vXtwAxcfJfhoGOVEJwmV4cjLWVTtEq5kjpCQj4otvtpNHKHhSDylks28krrNc=) 2026-01-30 02:54:28.451467 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJJISqcWxV3GTfcl9A/DW9lFvCNdEHMnrTLf+g6G8edwwBu8xDStq82Hl1Jkm+4MEC2zWF6SL5PN3N+vXd7SPg0=) 2026-01-30 02:54:28.451478 | orchestrator | 2026-01-30 02:54:28.451549 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-30 02:54:28.451561 | orchestrator | Friday 30 January 2026 02:54:19 +0000 (0:00:02.006) 0:00:14.306 ******** 2026-01-30 02:54:28.451571 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-30 02:54:28.451592 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-3) 2026-01-30 02:54:28.451602 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-30 02:54:28.451612 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-30 02:54:28.451622 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-30 02:54:28.451631 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-30 02:54:28.451641 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-30 02:54:28.451651 | orchestrator | 2026-01-30 02:54:28.451661 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-30 02:54:28.451672 | orchestrator | Friday 30 January 2026 02:54:24 +0000 (0:00:05.085) 0:00:19.391 ******** 2026-01-30 02:54:28.451682 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-30 02:54:28.451694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-30 02:54:28.451704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-30 02:54:28.451714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-30 02:54:28.451724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-30 02:54:28.451733 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-30 02:54:28.451743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-30 02:54:28.451752 | orchestrator | 2026-01-30 02:54:28.451762 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:28.451772 | orchestrator | Friday 30 January 2026 02:54:24 +0000 (0:00:00.168) 0:00:19.560 ******** 2026-01-30 02:54:28.451782 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGClfjfHf7HBOaNskxr608fHv4atPua4VVGn7zCzh6RP) 2026-01-30 02:54:28.451834 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDItXT22qP6QhT2Z0JJw3a/Paik1fMJd8sM0SN4TDpw5r8Od+3u+7spcqgT9AlRDks1PhgR6WCWbDmfjU+yRXpFcfLOysx4LXFdDxXhqHuC8X+Tl5YplsnKzBh5z8iwQr1B+UsKMBmvUIf37AQQ9AmuZ4ht5zIpmU3bEphp87Z+CoZe1QbGfRc5kUkWVE4e7CrHd7Jp4U10WuXUGwmCuNwKYgPwE8XEuISY0iR1swV2/S43MaSSaPweL6oF+ip4GtAeHwL2FPC977F1ywzS7I8iFG0KyyLZ2x+7MK+A7QA1RtGnFmxLWTdaPBFk3x+TXGNHJZDzTCaUyPDvLQ0u7bkc1KTAPLwFg540tCDAwL09OGtooVGujj6r2SGxSAoKCArFqsoqvtzEHKO0Rq0virBWsf8RHOps8durAvWlXH3fsuHwLhK2wCeMFYq7L/aA/st1F0Ns5QLMm8DqnW1449ZL3Lc0HEq2Do8LRuquJOCVYmNx2h5UaiIKiNUs2306k+0=) 2026-01-30 02:54:28.451857 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzlqrFOuy5K4S3RKAmyI5xiTKA0hjpAOtK5sXbhM6rQ8o/3fGFgucY0kYaBv1e8naFuukefSaQXuFUX706ayao=) 2026-01-30 02:54:28.451868 | orchestrator | 2026-01-30 02:54:28.451879 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:28.451891 | orchestrator | Friday 30 January 2026 
02:54:25 +0000 (0:00:00.945) 0:00:20.506 ******** 2026-01-30 02:54:28.451902 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFvUviy4k98NmGpXS+dnPIArNjUw77QTkzkMAnd/8W2x) 2026-01-30 02:54:28.451914 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDhnUvSP0rsJr9plUYP61CJ5KZwcfsFqM9YBC7DZLHf6tt1VBGXvn32UtZaH4bsxdCLj38r7GB5wtfAKYSYU9eZdMb50NIEJx9ldQFVkKf+WJ8Jv6K+xorIi20IVLk1lZ1OjL1letpQIAJ4VqBS+5dtRDi5htW6+4GHwKDIoLaI8xIQQPwtuBJRojE6xKkES9SCVAWoaDiSohW8A6MfgtY1jC56iLayOFYxSfuf3c5kOeb9Sgy9IWUeH50lvOMNJQvjiS6SzCwklQUobYszteaq675Pj9kpPMo8WvagWTIgr4oglrMdycMCYjqmNvNkt/I9LGZfL/YtslWYTnI7L0tpyU3mjohIjQbchWjxgTb/QpbZxNj0CXmjVK9bVSCO5ddiFeJztlrhS2BCg/LUEE5I+fy7jSlbRCMu0PzPtY/lBQegPQCEA9uxqr/au8ivt4KKLzHrGxt591BRpqjBb+9CgLBz2xSa0h/XLZZbm3GdeED+CsRQNJQMtPNtRqliKAk=) 2026-01-30 02:54:28.451925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHWpN/QHTGd4MC7nhCuPg3xanvqAdMJjW2nnqdcFdvsdsoJy1PkNTs4uBJEEmBmRYtya4gjLSdouf2xBBuwS0ek=) 2026-01-30 02:54:28.451937 | orchestrator | 2026-01-30 02:54:28.451949 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:28.451960 | orchestrator | Friday 30 January 2026 02:54:26 +0000 (0:00:00.903) 0:00:21.410 ******** 2026-01-30 02:54:28.451971 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPxu7ZT64FFifLk2oGIjDIhsvg3bwG0GFEWjOBY1GZiA) 2026-01-30 02:54:28.451982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzAYDc6VTQ0EWjyCQiEJGSEZbaqO8lylz1rAIEUj0NOgPLd057G6NaUdL3Bng19kzJc6uE/4Wa+Az/4g4W1K/MkKcw5gKMQ9AtzZfNZ1U4kom/DF8WPtRokt7FZ9WVZoNi0B279Oh09zMwlEqLGLc3l6EjeNg8GwHtXESx6hsoIU0ub67T49kIxHIOvHKVAGN+Rvau2+KIs4FnNAYLQFm0TxnxLOG/0xMNeaqsQd7MW+VQimN8Iu/gnbeS8O7jQwLnjj3aC8rk6lnoBnV2ZTtVNAVeG0kYiU0xC1ZNR130gw2+h2Qkxp9Nx1As7nWV1cZ1heGVV2H2mUJQ2025m88GIRbPJC8nwTBNQWijxkxzFoNL2wcyAmPD3vM79YGRT9KEAVQttTf7HLT+4UmadCzHNarqPFpbTnYKWw6mHZL1TkUQ+CkFuL64UJloPzhr1LG6byQwd3CqZZYxGHfNIRyxWw96iglvf1HnbYurNPckbNlPMdkn0Kmt53kb0AtYrQ0=) 2026-01-30 02:54:28.451993 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGdwraQExxsJmNg541xjD6+/kgefjJyRamTKpP7bCW85FYUdxii1SQQZ83VS4gKHNXgR3kl2umnbFaagvRfIdFQ=) 2026-01-30 02:54:28.452004 | orchestrator | 2026-01-30 02:54:28.452016 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:28.452027 | orchestrator | Friday 30 January 2026 02:54:27 +0000 (0:00:00.889) 0:00:22.299 ******** 2026-01-30 02:54:28.452038 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPjN+aqH/gwsd8JhOXMdFTv3qM7tTw6Vcfw5Y6GmaEYePr0pknaeWAxBW0GGCd7DuH6nunWD2IKV00d5hDM5A7YuxAcWr/BW7uIdoEizbmMu4/db0KjVbl62bS6fpNlWGZD8XjtSyk5BqO44PZmCFLQQ61l5paBnbAt9gNHi4vWYu7vhu7JPFhg0wvE5okdVJ4hiGoD8PSmxFxdk8hYFbavUAd9VdYPv0EkpSZ2irYuwzNkaMykhSV7jUP19V9oIYsHMTGN6JkVKwV3lTsfAcIEY5/T4DvD5eFW83VQZbDSixEtbTxByUmhd9jMNfjaKLCnUrRhGZgHNkX/zQU2Ae6zBeB429AB0oSs7POiB1+n6LjS8rCH/6fh7NDyMUOUtIampNZXYZqXpiCUoyG5qNSBD+8LKYn0o8QumQ28HqOqlAZUR4B2cvJblywkPGL5ZVri1w6nNKNsAJtTWf40UfapaYg/ZX/ueAyv8y0y/iPmSf1qSenT6QwuEQEE2X8NWM=) 2026-01-30 02:54:28.452050 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIHtjw/WivjFjphM+X6QSSRYnmTldtiF1rYEAgyjBBxaTV0dE87x+/6Zf9YWwbIaKVAlOQWqGTeBGaWcElbVbN0=) 
2026-01-30 02:54:28.452079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEfkqJd5AAgrzsJucmMyq98WicRJYn7dNb+LfGogKo9E) 2026-01-30 02:54:32.153659 | orchestrator | 2026-01-30 02:54:32.153763 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:32.153780 | orchestrator | Friday 30 January 2026 02:54:28 +0000 (0:00:00.906) 0:00:23.206 ******** 2026-01-30 02:54:32.153796 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCK8jV/Rvhjorl0L8MEquD9AS3NvGD/BKcmIvTp42cewbMZxf7a8aXRuXRQhDR61gH6F2OpuRpvdXszuW/zMeTffmEQ8bffJwNnoY42/5IR4LSwYp9MTTe9BQ401FJc5L6hKwvOzLoOWV3aFvu0xqEtMwnJkQuQ9sGiLITT16IwX08TXjdL+I3W8ysZouu16Jw1/qmN+C4EWkwmRObqob+mv8h0BIrcmljUun74fuV87tKVMkGCA0gN4pG6njNYsI825dVdTuYd8x26P5oCBXXzxU6HY8aQM7J1Cu90l5w+hIUJ2ipN3vMnlSm5gvgg2LPzg7b64KPkvF1fFJhfdjewE0vsRKrcEsujN1QCn3G2TT38iOCi+VpOvBDhy5I6l+PtXTh2j7UcvTVwV9chbOlZ9BXrmV3xS5UuMa0w6FxDpEgBxXBzAiJlD45ZIF+kIfpfPDov8YpWQn0KTsogLwT4Lj3OHmPrrw7z1lyQUO0Gd5IYOZRFoT05hWwH3nShqVU=) 2026-01-30 02:54:32.153812 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBecPGDj9Tu7ZkqsN1M3q74GE/8NbCVAILeifNS5f5dLZIc1tl/vv7aRmXI7sRAIaJ/uPmvBoKW34DbJzSBDemg=) 2026-01-30 02:54:32.153826 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINMRJitmX6rtAa4nq+npECGm0mIaF6rrufGOyfJo+EIN) 2026-01-30 02:54:32.153838 | orchestrator | 2026-01-30 02:54:32.153849 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:32.153860 | orchestrator | Friday 30 January 2026 02:54:29 +0000 (0:00:00.882) 0:00:24.089 ******** 2026-01-30 02:54:32.153871 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBChT9YV8Y+soOIXszSwsOhuIADmKYbfLa3Q1dUx9i4acVCoIbyQdyqbtVbKBQWjGwt3BNvGs3gtfsQfIj7wr4pw=) 2026-01-30 02:54:32.153883 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDxSn7gFPMName8A0VYn9UWgyilCjPKl5UAI7vOAk6W2lkwPy8b6Yby6vCACFc4ajcnNg+ryhnM5oFNe3KBMN1vaeAuTDxw8pUVcHsFlGkkq0k3m3yBkbumrVJgZeKi5XabbVf+Us6c1RBTcBLVtAf7KZhXjTUnMaAMsR87ltgX4KodDZoi5uIWvv9StUYjeDF3yVUYt2dXn4F/6AjxmVYTynDJlfsEKrLkAIqmygGABYMOObSM6ETRvfWb5fyUKG4dBy7R/Wk82QN6gxFXPB9GW2uB5fqGgeUna3TWebVQItxEQfnvsI54MZbeRsUF5Oea0T7Ilbha/ZAmtuIeapZGIDc9Rw9eyXb0zCaJgHpaecR7ID/voqYpvyAIAr/a4ZYuKvV8a3ghihG8TpSQyfANkRMLXmaLN083eEcusVtL7ssPbXD42zWmhxDLi1TwiZ6XReRkFkDD6a2jdL2DuNtAcScoOt8SPvkhWREmD5uqsfHJH0Li3EkRRb9XFckvVS8=) 2026-01-30 02:54:32.153895 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII1Q2a4dvSBiWhoH6N0ngQv5aY2WQRBnco+e5/pgK0/N) 2026-01-30 02:54:32.153905 | orchestrator | 2026-01-30 02:54:32.153917 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-30 02:54:32.153929 | orchestrator | Friday 30 January 2026 02:54:30 +0000 (0:00:00.933) 0:00:25.023 ******** 2026-01-30 02:54:32.153959 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCm6A+vX++FgDhoo4NR0PoPoG1WurGAdshXjpBDynBJpQtesjMmrW7ERdrjjXv/c91suCX4HEOoFC6WYwYQsOeWebM55IyHK3wYd+t6S/ukHy30D8SJXxnZx0nfflR/Z7/D/ZsHDYzc37d4UyIaA1DkomDVKLfA/JczjtPjEkQzDLstyIDhqyxvtfnqAu8Cf22Z0qpn/0e3xbTvFdU8Hj/mcLmQDJ35WL9L2I4Nvb55xVHay5uU7seWB9mNHzc4i/K1ZKSl6k3yrxikSF/TaINLse+9h81F9kJHD9W5hnzZB/MVRwVScXUI19/OSGfqVZAXw8Eti58GEmTVgIqn+/Nvm6APlyYa8fEu5EQBwIpbBLGYTkvMvWxkzGqtQIhgkCrTXsFkYiKzGGrSmD2WMfTiW8NxcMDM/NEhSp2S25S7IxHuAED4RU/qN8rp6ig0tgx2f/vXtwAxcfJfhoGOVEJwmV4cjLWVTtEq5kjpCQj4otvtpNHKHhSDylks28krrNc=) 2026-01-30 02:54:32.153971 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJJISqcWxV3GTfcl9A/DW9lFvCNdEHMnrTLf+g6G8edwwBu8xDStq82Hl1Jkm+4MEC2zWF6SL5PN3N+vXd7SPg0=) 2026-01-30 02:54:32.153982 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGUnUNEPwqqT+oO0oJsmmcPXOB6u3IkSnbzRQprgrZMn) 2026-01-30 02:54:32.154090 | orchestrator | 2026-01-30 02:54:32.154105 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-30 02:54:32.154117 | orchestrator | Friday 30 January 2026 02:54:31 +0000 (0:00:00.928) 0:00:25.951 ******** 2026-01-30 02:54:32.154130 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-30 02:54:32.154143 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-30 02:54:32.154155 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-30 02:54:32.154167 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-30 02:54:32.154179 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-30 02:54:32.154191 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-30 02:54:32.154203 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-30 02:54:32.154217 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:54:32.154229 | orchestrator | 2026-01-30 02:54:32.154263 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-30 02:54:32.154283 | orchestrator | Friday 30 January 2026 02:54:31 +0000 (0:00:00.141) 0:00:26.093 ******** 2026-01-30 02:54:32.154296 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:54:32.154309 | orchestrator | 2026-01-30 02:54:32.154321 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-30 02:54:32.154334 | orchestrator | Friday 30 January 2026 02:54:31 +0000 
(0:00:00.057) 0:00:26.150 ******** 2026-01-30 02:54:32.154347 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:54:32.154359 | orchestrator | 2026-01-30 02:54:32.154372 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-30 02:54:32.154384 | orchestrator | Friday 30 January 2026 02:54:31 +0000 (0:00:00.051) 0:00:26.201 ******** 2026-01-30 02:54:32.154397 | orchestrator | changed: [testbed-manager] 2026-01-30 02:54:32.154408 | orchestrator | 2026-01-30 02:54:32.154420 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 02:54:32.154433 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 02:54:32.154446 | orchestrator | 2026-01-30 02:54:32.154458 | orchestrator | 2026-01-30 02:54:32.154471 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 02:54:32.154483 | orchestrator | Friday 30 January 2026 02:54:32 +0000 (0:00:00.574) 0:00:26.776 ******** 2026-01-30 02:54:32.154567 | orchestrator | =============================================================================== 2026-01-30 02:54:32.154589 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.83s 2026-01-30 02:54:32.154602 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.09s 2026-01-30 02:54:32.154613 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.01s 2026-01-30 02:54:32.154624 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-01-30 02:54:32.154635 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-01-30 02:54:32.154646 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-01-30 
02:54:32.154657 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-01-30 02:54:32.154667 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-01-30 02:54:32.154678 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-01-30 02:54:32.154689 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2026-01-30 02:54:32.154699 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-01-30 02:54:32.154710 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2026-01-30 02:54:32.154721 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2026-01-30 02:54:32.154741 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.90s 2026-01-30 02:54:32.154752 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.89s 2026-01-30 02:54:32.154762 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.88s 2026-01-30 02:54:32.154773 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.57s 2026-01-30 02:54:32.154784 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2026-01-30 02:54:32.154795 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-01-30 02:54:32.154806 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.14s 2026-01-30 02:54:32.327178 | orchestrator | + osism apply squid 2026-01-30 02:54:44.398470 | orchestrator | 2026-01-30 02:54:44 | INFO  | Task 76a8da0f-0b89-4730-b1fe-5e5c8b36596b (squid) was prepared for execution. 
2026-01-30 02:54:44.398759 | orchestrator | 2026-01-30 02:54:44 | INFO  | It takes a moment until task 76a8da0f-0b89-4730-b1fe-5e5c8b36596b (squid) has been started and output is visible here. 2026-01-30 02:56:39.279917 | orchestrator | 2026-01-30 02:56:39.280009 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-30 02:56:39.280020 | orchestrator | 2026-01-30 02:56:39.280028 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-30 02:56:39.280036 | orchestrator | Friday 30 January 2026 02:54:48 +0000 (0:00:00.116) 0:00:00.116 ******** 2026-01-30 02:56:39.280043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-30 02:56:39.280051 | orchestrator | 2026-01-30 02:56:39.280058 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-30 02:56:39.280065 | orchestrator | Friday 30 January 2026 02:54:48 +0000 (0:00:00.073) 0:00:00.190 ******** 2026-01-30 02:56:39.280071 | orchestrator | ok: [testbed-manager] 2026-01-30 02:56:39.280079 | orchestrator | 2026-01-30 02:56:39.280086 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-30 02:56:39.280093 | orchestrator | Friday 30 January 2026 02:54:49 +0000 (0:00:01.072) 0:00:01.262 ******** 2026-01-30 02:56:39.280100 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-30 02:56:39.280107 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-30 02:56:39.280114 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-30 02:56:39.280121 | orchestrator | 2026-01-30 02:56:39.280127 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-30 02:56:39.280134 | orchestrator | Friday 30 
January 2026 02:54:50 +0000 (0:00:00.969) 0:00:02.232 ******** 2026-01-30 02:56:39.280141 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-30 02:56:39.280148 | orchestrator | 2026-01-30 02:56:39.280155 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-30 02:56:39.280161 | orchestrator | Friday 30 January 2026 02:54:51 +0000 (0:00:00.903) 0:00:03.135 ******** 2026-01-30 02:56:39.280168 | orchestrator | ok: [testbed-manager] 2026-01-30 02:56:39.280175 | orchestrator | 2026-01-30 02:56:39.280182 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-30 02:56:39.280189 | orchestrator | Friday 30 January 2026 02:54:51 +0000 (0:00:00.312) 0:00:03.448 ******** 2026-01-30 02:56:39.280196 | orchestrator | changed: [testbed-manager] 2026-01-30 02:56:39.280203 | orchestrator | 2026-01-30 02:56:39.280211 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-30 02:56:39.280227 | orchestrator | Friday 30 January 2026 02:54:52 +0000 (0:00:00.828) 0:00:04.277 ******** 2026-01-30 02:56:39.280234 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-30 02:56:39.280241 | orchestrator | ok: [testbed-manager]
2026-01-30 02:56:39.280269 | orchestrator |
2026-01-30 02:56:39.280277 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-01-30 02:56:39.280284 | orchestrator | Friday 30 January 2026 02:55:22 +0000 (0:00:30.304) 0:00:34.581 ********
2026-01-30 02:56:39.280290 | orchestrator | changed: [testbed-manager]
2026-01-30 02:56:39.280297 | orchestrator |
2026-01-30 02:56:39.280304 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-01-30 02:56:39.280311 | orchestrator | Friday 30 January 2026 02:55:38 +0000 (0:00:15.659) 0:00:50.241 ********
2026-01-30 02:56:39.280317 | orchestrator | Pausing for 60 seconds
2026-01-30 02:56:39.280324 | orchestrator | changed: [testbed-manager]
2026-01-30 02:56:39.280331 | orchestrator |
2026-01-30 02:56:39.280338 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-01-30 02:56:39.280345 | orchestrator | Friday 30 January 2026 02:56:38 +0000 (0:01:00.074) 0:01:50.316 ********
2026-01-30 02:56:39.280351 | orchestrator | ok: [testbed-manager]
2026-01-30 02:56:39.280358 | orchestrator |
2026-01-30 02:56:39.280365 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-01-30 02:56:39.280372 | orchestrator | Friday 30 January 2026 02:56:38 +0000 (0:00:00.060) 0:01:50.376 ********
2026-01-30 02:56:39.280378 | orchestrator | changed: [testbed-manager]
2026-01-30 02:56:39.280385 | orchestrator |
2026-01-30 02:56:39.280392 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 02:56:39.280399 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 02:56:39.280406 | orchestrator |
2026-01-30 02:56:39.280413 | orchestrator |
2026-01-30 02:56:39.280419 | orchestrator |
2026-01-30 02:56:39.280426 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 02:56:39.280433 | orchestrator | Friday 30 January 2026 02:56:39 +0000 (0:00:00.595) 0:01:50.971 ********
2026-01-30 02:56:39.280452 | orchestrator | ===============================================================================
2026-01-30 02:56:39.280459 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2026-01-30 02:56:39.280466 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 30.30s
2026-01-30 02:56:39.280473 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.66s
2026-01-30 02:56:39.280481 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.07s
2026-01-30 02:56:39.280489 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.97s
2026-01-30 02:56:39.280496 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.90s
2026-01-30 02:56:39.280504 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.83s
2026-01-30 02:56:39.280512 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s
2026-01-30 02:56:39.280519 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s
2026-01-30 02:56:39.280527 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2026-01-30 02:56:39.552128 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-01-30 02:56:39.552434 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-30 02:56:39.602252 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-30 02:56:39.602344 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-30 02:56:39.608967 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-01-30 02:56:39.609287 | orchestrator | + set -e
2026-01-30 02:56:39.609312 | orchestrator | + NAMESPACE=kolla/release
2026-01-30 02:56:39.615925 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-01-30 02:56:39.687173 | orchestrator | ++ semver 9.5.0 9.0.0
2026-01-30 02:56:39.688039 | orchestrator | + [[ 1 -lt 0 ]]
2026-01-30 02:56:51.674300 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-01-30 02:56:51.674455 | orchestrator | 2026-01-30 02:56:51 | INFO  | Task 8a343fea-6616-4e34-818a-38c8553cfeb9 (operator) was prepared for execution.
2026-01-30 02:57:06.985664 | orchestrator | 2026-01-30 02:56:51 | INFO  | It takes a moment until task 8a343fea-6616-4e34-818a-38c8553cfeb9 (operator) has been started and output is visible here.
2026-01-30 02:57:06.985818 | orchestrator |
2026-01-30 02:57:06.985839 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-01-30 02:57:06.985851 | orchestrator |
2026-01-30 02:57:06.985863 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-30 02:57:06.985875 | orchestrator | Friday 30 January 2026 02:56:55 +0000 (0:00:00.103) 0:00:00.103 ********
2026-01-30 02:57:06.985888 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:57:06.985900 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:57:06.985911 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:57:06.985923 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:57:06.985934 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:57:06.985945 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:57:06.985973 | orchestrator |
2026-01-30 02:57:06.985985 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-01-30 02:57:06.985996 | orchestrator | Friday 30 January 2026 02:56:58 +0000 (0:00:03.244) 0:00:03.348 ********
2026-01-30 02:57:06.986007 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:57:06.986062 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:57:06.986076 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:57:06.986086 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:57:06.986097 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:57:06.986107 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:57:06.986117 | orchestrator |
2026-01-30 02:57:06.986128 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-30 02:57:06.986139 | orchestrator |
2026-01-30 02:57:06.986150 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-30 02:57:06.986160 | orchestrator | Friday 30 January 2026 02:56:59 +0000 (0:00:00.778) 0:00:04.126 ********
2026-01-30 02:57:06.986170 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:57:06.986181 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:57:06.986192 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:57:06.986202 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:57:06.986213 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:57:06.986224 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:57:06.986235 | orchestrator |
2026-01-30 02:57:06.986246 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-30 02:57:06.986258 | orchestrator | Friday 30 January 2026 02:56:59 +0000 (0:00:00.146) 0:00:04.273 ********
2026-01-30 02:57:06.986269 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:57:06.986281 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:57:06.986288 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:57:06.986296 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:57:06.986303 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:57:06.986310 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:57:06.986318 | orchestrator |
2026-01-30 02:57:06.986325 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-30 02:57:06.986332 | orchestrator | Friday 30 January 2026 02:56:59 +0000 (0:00:00.568) 0:00:04.422 ********
2026-01-30 02:57:06.986340 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:57:06.986348 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:57:06.986355 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:57:06.986362 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:57:06.986369 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:57:06.986376 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:57:06.986383 | orchestrator |
2026-01-30 02:57:06.986390 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-30 02:57:06.986398 | orchestrator | Friday 30 January 2026 02:57:00 +0000 (0:00:00.787) 0:00:04.991 ********
2026-01-30 02:57:06.986405 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:57:06.986412 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:57:06.986419 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:57:06.986445 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:57:06.986452 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:57:06.986459 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:57:06.986467 | orchestrator |
2026-01-30 02:57:06.986474 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-30 02:57:06.986481 | orchestrator | Friday 30 January 2026 02:57:01 +0000 (0:00:01.193) 0:00:05.778 ********
2026-01-30 02:57:06.986487 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-30 02:57:06.986493 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-30 02:57:06.986499 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-30 02:57:06.986505 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-30 02:57:06.986511 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-30 02:57:06.986518 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-30 02:57:06.986524 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-30 02:57:06.986530 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-30 02:57:06.986536 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-30 02:57:06.986542 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-30 02:57:06.986550 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-30 02:57:06.986560 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-30 02:57:06.986571 | orchestrator |
2026-01-30 02:57:06.986580 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-30 02:57:06.986590 | orchestrator | Friday 30 January 2026 02:57:02 +0000 (0:00:01.276) 0:00:06.972 ********
2026-01-30 02:57:06.986600 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:57:06.986610 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:57:06.986622 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:57:06.986633 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:57:06.986643 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:57:06.986651 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:57:06.986657 | orchestrator |
2026-01-30 02:57:06.986664 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-30 02:57:06.986670 | orchestrator | Friday 30 January 2026 02:57:03 +0000 (0:00:01.270) 0:00:08.248 ********
2026-01-30 02:57:06.986676 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-30 02:57:06.986683 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-30 02:57:06.986689 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-30 02:57:06.986709 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-30 02:57:06.986716 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-30 02:57:06.986722 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-30 02:57:06.986783 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-30 02:57:06.986790 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-30 02:57:06.986796 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-30 02:57:06.986802 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-30 02:57:06.986808 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-30 02:57:06.986815 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-30 02:57:06.986821 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-30 02:57:06.986827 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-30 02:57:06.986834 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-30 02:57:06.986840 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-30 02:57:06.986846 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-30 02:57:06.986860 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-30 02:57:06.986866 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-30 02:57:06.986872 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-30 02:57:06.986879 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-30 02:57:06.986885 | orchestrator |
2026-01-30 02:57:06.986892 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-30 02:57:06.986898 | orchestrator | Friday 30 January 2026 02:57:04 +0000 (0:00:01.270) 0:00:09.518 ********
2026-01-30 02:57:06.986904 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:57:06.986911 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:57:06.986917 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:57:06.986923 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:57:06.986929 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:57:06.986935 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:57:06.986942 | orchestrator |
2026-01-30 02:57:06.986948 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-30 02:57:06.986954 | orchestrator | Friday 30 January 2026 02:57:05 +0000 (0:00:00.161) 0:00:09.680 ********
2026-01-30 02:57:06.986960 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:57:06.986967 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:57:06.986973 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:57:06.986979 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:57:06.986985 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:57:06.986991 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:57:06.986998 | orchestrator |
2026-01-30 02:57:06.987004 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-30 02:57:06.987010 | orchestrator | Friday 30 January 2026 02:57:05 +0000 (0:00:00.572) 0:00:09.845 ********
2026-01-30 02:57:06.987017 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:57:06.987023 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:57:06.987029 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:57:06.987035 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:57:06.987041 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:57:06.987047 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:57:06.987054 | orchestrator |
2026-01-30 02:57:06.987060 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-30 02:57:06.987066 | orchestrator | Friday 30 January 2026 02:57:05 +0000 (0:00:00.572) 0:00:10.417 ********
2026-01-30 02:57:06.987072 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:57:06.987079 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:57:06.987085 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:57:06.987091 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:57:06.987105 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:57:06.987111 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:57:06.987121 | orchestrator |
2026-01-30 02:57:06.987131 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-30 02:57:06.987141 | orchestrator | Friday 30 January 2026 02:57:06 +0000 (0:00:00.151) 0:00:10.569 ********
2026-01-30 02:57:06.987151 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-30 02:57:06.987162 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-30 02:57:06.987171 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:57:06.987181 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:57:06.987192 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-30 02:57:06.987202 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:57:06.987214 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-30 02:57:06.987223 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-30 02:57:06.987234 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:57:06.987241 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:57:06.987247 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-30 02:57:06.987253 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:57:06.987265 | orchestrator |
2026-01-30 02:57:06.987271 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-30 02:57:06.987277 | orchestrator | Friday 30 January 2026 02:57:06 +0000 (0:00:00.684) 0:00:11.253 ********
2026-01-30 02:57:06.987284 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:57:06.987290 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:57:06.987296 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:57:06.987302 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:57:06.987308 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:57:06.987314 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:57:06.987320 | orchestrator |
2026-01-30 02:57:06.987326 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-30 02:57:06.987333 | orchestrator | Friday 30 January 2026 02:57:06 +0000 (0:00:00.145) 0:00:11.399 ********
2026-01-30 02:57:06.987339 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:57:06.987345 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:57:06.987351 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:57:06.987363 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:57:08.187003 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:57:08.187108 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:57:08.187129 | orchestrator |
2026-01-30 02:57:08.187149 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-30 02:57:08.187167 | orchestrator | Friday 30 January 2026 02:57:06 +0000 (0:00:00.129) 0:00:11.529 ********
2026-01-30 02:57:08.187185 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:57:08.187203 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:57:08.187223 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:57:08.187236 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:57:08.187245 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:57:08.187255 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:57:08.187286 | orchestrator |
2026-01-30 02:57:08.187296 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-30 02:57:08.187306 | orchestrator | Friday 30 January 2026 02:57:07 +0000 (0:00:00.143) 0:00:11.672 ********
2026-01-30 02:57:08.187315 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:57:08.187325 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:57:08.187334 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:57:08.187344 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:57:08.187353 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:57:08.187363 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:57:08.187373 | orchestrator |
2026-01-30 02:57:08.187382 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-30 02:57:08.187393 | orchestrator | Friday 30 January 2026 02:57:07 +0000 (0:00:00.633) 0:00:12.306 ********
2026-01-30 02:57:08.187403 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:57:08.187412 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:57:08.187421 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:57:08.187431 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:57:08.187441 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:57:08.187458 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:57:08.187474 | orchestrator |
2026-01-30 02:57:08.187492 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 02:57:08.187510 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 02:57:08.187526 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 02:57:08.187543 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 02:57:08.187589 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 02:57:08.187607 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 02:57:08.187624 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 02:57:08.187641 | orchestrator |
2026-01-30 02:57:08.187658 | orchestrator |
2026-01-30 02:57:08.187670 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 02:57:08.187682 | orchestrator | Friday 30 January 2026 02:57:07 +0000 (0:00:00.202) 0:00:12.509 ********
2026-01-30 02:57:08.187692 | orchestrator | ===============================================================================
2026-01-30 02:57:08.187703 | orchestrator | Gathering Facts --------------------------------------------------------- 3.24s
2026-01-30 02:57:08.187715 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s
2026-01-30 02:57:08.187767 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s
2026-01-30 02:57:08.187780 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s
2026-01-30 02:57:08.187791 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s
2026-01-30 02:57:08.187802 | orchestrator | Do not require tty for all users ---------------------------------------- 0.78s
2026-01-30 02:57:08.187813 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s
2026-01-30 02:57:08.187825 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2026-01-30 02:57:08.187836 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s
2026-01-30 02:57:08.187847 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.57s
2026-01-30 02:57:08.187857 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2026-01-30 02:57:08.187869 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-01-30 02:57:08.187880 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-01-30 02:57:08.187891 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s
2026-01-30 02:57:08.187903 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.15s
2026-01-30 02:57:08.187920 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2026-01-30 02:57:08.187938 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-01-30 02:57:08.187955 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-01-30 02:57:08.436797 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2026-01-30 02:57:10.250109 | orchestrator | + osism apply --environment custom facts
2026-01-30 02:57:20.495915 | orchestrator | 2026-01-30 02:57:10 | INFO  | Trying to run play facts in environment custom
2026-01-30 02:57:20.495915 | orchestrator | 2026-01-30 02:57:20 | INFO  | Task 51cd5079-8212-4e36-a343-17b10b41f5f4 (facts) was prepared for execution.
2026-01-30 02:57:20.496038 | orchestrator | 2026-01-30 02:57:20 | INFO  | It takes a moment until task 51cd5079-8212-4e36-a343-17b10b41f5f4 (facts) has been started and output is visible here.
2026-01-30 02:58:03.399416 | orchestrator |
2026-01-30 02:58:03.399540 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-30 02:58:03.399557 | orchestrator |
2026-01-30 02:58:03.399570 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-30 02:58:03.399581 | orchestrator | Friday 30 January 2026 02:57:24 +0000 (0:00:00.080) 0:00:00.080 ********
2026-01-30 02:58:03.399593 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:03.399606 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:03.399618 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:03.399629 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:03.399664 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:03.399676 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:03.399687 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:03.399698 | orchestrator |
2026-01-30 02:58:03.399710 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-30 02:58:03.399721 | orchestrator | Friday 30 January 2026 02:57:25 +0000 (0:00:01.347) 0:00:01.428 ********
2026-01-30 02:58:03.399732 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:03.399743 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:03.399754 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:03.399784 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:03.399795 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:03.399851 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:03.399863 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:03.399874 | orchestrator |
2026-01-30 02:58:03.399885 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-30 02:58:03.399896 | orchestrator |
2026-01-30 02:58:03.399907 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-30 02:58:03.399918 | orchestrator | Friday 30 January 2026 02:57:26 +0000 (0:00:01.138) 0:00:02.566 ********
2026-01-30 02:58:03.399929 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.399940 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.399951 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.399965 | orchestrator |
2026-01-30 02:58:03.399978 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-30 02:58:03.399992 | orchestrator | Friday 30 January 2026 02:57:27 +0000 (0:00:00.095) 0:00:02.661 ********
2026-01-30 02:58:03.400004 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.400018 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.400031 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.400043 | orchestrator |
2026-01-30 02:58:03.400056 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-30 02:58:03.400069 | orchestrator | Friday 30 January 2026 02:57:27 +0000 (0:00:00.181) 0:00:02.843 ********
2026-01-30 02:58:03.400081 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.400094 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.400127 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.400141 | orchestrator |
2026-01-30 02:58:03.400154 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-30 02:58:03.400167 | orchestrator | Friday 30 January 2026 02:57:27 +0000 (0:00:00.112) 0:00:03.067 ********
2026-01-30 02:58:03.400181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 02:58:03.400195 | orchestrator |
2026-01-30 02:58:03.400207 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-30 02:58:03.400220 | orchestrator | Friday 30 January 2026 02:57:27 +0000 (0:00:00.112) 0:00:03.180 ********
2026-01-30 02:58:03.400233 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.400246 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.400259 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.400272 | orchestrator |
2026-01-30 02:58:03.400285 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-30 02:58:03.400296 | orchestrator | Friday 30 January 2026 02:57:27 +0000 (0:00:00.424) 0:00:03.605 ********
2026-01-30 02:58:03.400307 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:03.400318 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:03.400329 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:03.400340 | orchestrator |
2026-01-30 02:58:03.400351 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-30 02:58:03.400362 | orchestrator | Friday 30 January 2026 02:57:28 +0000 (0:00:00.130) 0:00:03.736 ********
2026-01-30 02:58:03.400373 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:03.400384 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:03.400403 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:03.400414 | orchestrator |
2026-01-30 02:58:03.400425 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-30 02:58:03.400436 | orchestrator | Friday 30 January 2026 02:57:29 +0000 (0:00:01.047) 0:00:04.784 ********
2026-01-30 02:58:03.400447 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.400458 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.400469 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.400480 | orchestrator |
2026-01-30 02:58:03.400491 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-30 02:58:03.400553 | orchestrator | Friday 30 January 2026 02:57:29 +0000 (0:00:00.450) 0:00:05.234 ********
2026-01-30 02:58:03.400566 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:03.400576 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:03.400587 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:03.400599 | orchestrator |
2026-01-30 02:58:03.400609 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-30 02:58:03.400620 | orchestrator | Friday 30 January 2026 02:57:30 +0000 (0:00:01.044) 0:00:06.279 ********
2026-01-30 02:58:03.400631 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:03.400642 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:03.400653 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:03.400664 | orchestrator |
2026-01-30 02:58:03.400675 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-30 02:58:03.400686 | orchestrator | Friday 30 January 2026 02:57:46 +0000 (0:00:16.172) 0:00:22.451 ********
2026-01-30 02:58:03.400696 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:03.400707 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:03.400718 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:03.400729 | orchestrator |
2026-01-30 02:58:03.400740 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-30 02:58:03.400772 | orchestrator | Friday 30 January 2026 02:57:46 +0000 (0:00:00.108) 0:00:22.560 ********
2026-01-30 02:58:03.400784 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:03.400795 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:03.400836 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:03.400847 | orchestrator |
2026-01-30 02:58:03.400858 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-30 02:58:03.400869 | orchestrator | Friday 30 January 2026 02:57:54 +0000 (0:00:07.669) 0:00:30.229 ********
2026-01-30 02:58:03.400880 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.400891 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.400902 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.400913 | orchestrator |
2026-01-30 02:58:03.400924 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-30 02:58:03.400935 | orchestrator | Friday 30 January 2026 02:57:55 +0000 (0:00:00.433) 0:00:30.663 ********
2026-01-30 02:58:03.400946 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-30 02:58:03.400957 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-30 02:58:03.400968 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-30 02:58:03.400979 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-30 02:58:03.400990 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-30 02:58:03.401001 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-30 02:58:03.401011 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-30 02:58:03.401022 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-30 02:58:03.401033 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-30 02:58:03.401057 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-30 02:58:03.401068 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-30 02:58:03.401090 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-30 02:58:03.401110 | orchestrator |
2026-01-30 02:58:03.401121 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-30 02:58:03.401132 | orchestrator | Friday 30 January 2026 02:57:58 +0000 (0:00:03.449) 0:00:34.113 ********
2026-01-30 02:58:03.401143 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.401154 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.401164 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.401175 | orchestrator |
2026-01-30 02:58:03.401186 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-30 02:58:03.401197 | orchestrator |
2026-01-30 02:58:03.401208 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-30 02:58:03.401219 | orchestrator | Friday 30 January 2026 02:57:59 +0000 (0:00:01.250) 0:00:35.364 ********
2026-01-30 02:58:03.401230 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:03.401241 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:03.401252 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:03.401263 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:03.401274 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:03.401285 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:03.401295 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:03.401306 | orchestrator |
2026-01-30 02:58:03.401317 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 02:58:03.401329 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 02:58:03.401340 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 02:58:03.401353 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 02:58:03.401364 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 02:58:03.401375 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 02:58:03.401387 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 02:58:03.401398 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 02:58:03.401408 | orchestrator |
2026-01-30 02:58:03.401419 | orchestrator |
2026-01-30 02:58:03.401430 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 02:58:03.401441 | orchestrator | Friday 30 January 2026 02:58:03 +0000 (0:00:03.661) 0:00:39.025 ********
2026-01-30 02:58:03.401452 | orchestrator | ===============================================================================
2026-01-30 02:58:03.401463 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.17s
2026-01-30 02:58:03.401474 | orchestrator | Install required packages (Debian) -------------------------------------- 7.67s
2026-01-30 02:58:03.401485 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.66s
2026-01-30 02:58:03.401496 | orchestrator | Copy fact files --------------------------------------------------------- 3.45s
2026-01-30 02:58:03.401507 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s
2026-01-30 02:58:03.401517 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.25s
2026-01-30 02:58:03.401540 | orchestrator | Copy fact file ---------------------------------------------------------- 1.14s
2026-01-30 02:58:03.602358 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2026-01-30 02:58:03.602450 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2026-01-30 02:58:03.602487 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.45s 2026-01-30 02:58:03.602504 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2026-01-30 02:58:03.602510 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2026-01-30 02:58:03.602517 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2026-01-30 02:58:03.602523 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s 2026-01-30 02:58:03.602529 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s 2026-01-30 02:58:03.602535 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.11s 2026-01-30 02:58:03.602543 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2026-01-30 02:58:03.602549 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s 2026-01-30 02:58:03.878480 | orchestrator | + osism apply bootstrap 2026-01-30 02:58:16.054681 | orchestrator | 2026-01-30 02:58:16 | INFO  | Task dfecb6a5-0402-47ab-8aee-7d2bbf3646ad (bootstrap) was prepared for execution. 2026-01-30 02:58:16.054808 | orchestrator | 2026-01-30 02:58:16 | INFO  | It takes a moment until task dfecb6a5-0402-47ab-8aee-7d2bbf3646ad (bootstrap) has been started and output is visible here. 
2026-01-30 02:58:31.257915 | orchestrator |
2026-01-30 02:58:31.258098 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-01-30 02:58:31.258133 | orchestrator |
2026-01-30 02:58:31.258152 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-01-30 02:58:31.258171 | orchestrator | Friday 30 January 2026 02:58:19 +0000 (0:00:00.110) 0:00:00.110 ********
2026-01-30 02:58:31.258187 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:31.258207 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:31.258226 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:31.258242 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:31.258261 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:31.258279 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:31.258298 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:31.258317 | orchestrator |
2026-01-30 02:58:31.258335 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-30 02:58:31.258351 | orchestrator |
2026-01-30 02:58:31.258369 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-30 02:58:31.258388 | orchestrator | Friday 30 January 2026 02:58:19 +0000 (0:00:00.166) 0:00:00.276 ********
2026-01-30 02:58:31.258406 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:31.258426 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:31.258445 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:31.258466 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:31.258484 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:31.258504 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:31.258523 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:31.258543 | orchestrator |
2026-01-30 02:58:31.258563 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-01-30 02:58:31.258583 | orchestrator |
2026-01-30 02:58:31.258604 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-30 02:58:31.258618 | orchestrator | Friday 30 January 2026 02:58:23 +0000 (0:00:03.690) 0:00:03.967 ********
2026-01-30 02:58:31.258631 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-01-30 02:58:31.258644 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-01-30 02:58:31.258658 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-01-30 02:58:31.258671 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-01-30 02:58:31.258683 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-01-30 02:58:31.258696 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-01-30 02:58:31.258708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 02:58:31.258747 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-30 02:58:31.258760 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 02:58:31.258773 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-30 02:58:31.258786 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-30 02:58:31.258798 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 02:58:31.258809 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-30 02:58:31.258821 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-01-30 02:58:31.258832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 02:58:31.258879 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:31.258890 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-30 02:58:31.258901 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 02:58:31.258912 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-01-30 02:58:31.258923 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-30 02:58:31.258933 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-30 02:58:31.258944 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-30 02:58:31.258955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 02:58:31.258965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 02:58:31.258976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-30 02:58:31.258986 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-30 02:58:31.258998 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-01-30 02:58:31.259009 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-30 02:58:31.259020 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-30 02:58:31.259030 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 02:58:31.259041 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-30 02:58:31.259052 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:31.259063 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:31.259073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-30 02:58:31.259084 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-30 02:58:31.259095 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-30 02:58:31.259105 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:31.259116 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-01-30 02:58:31.259127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-30 02:58:31.259138 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-30 02:58:31.259148 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 02:58:31.259159 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-30 02:58:31.259169 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-30 02:58:31.259180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 02:58:31.259190 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-30 02:58:31.259201 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-30 02:58:31.259233 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 02:58:31.259245 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:58:31.259256 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 02:58:31.259267 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-30 02:58:31.259295 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-30 02:58:31.259307 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 02:58:31.259327 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:58:31.259338 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 02:58:31.259349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 02:58:31.259360 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:58:31.259370 | orchestrator |
2026-01-30 02:58:31.259381 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-01-30 02:58:31.259392 | orchestrator |
2026-01-30 02:58:31.259403 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-01-30 02:58:31.259414 | orchestrator | Friday 30 January 2026 02:58:24 +0000 (0:00:00.381) 0:00:04.348 ********
2026-01-30 02:58:31.259425 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:31.259435 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:31.259446 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:31.259464 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:31.259487 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:31.259511 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:31.259529 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:31.259546 | orchestrator |
2026-01-30 02:58:31.259564 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-01-30 02:58:31.259582 | orchestrator | Friday 30 January 2026 02:58:25 +0000 (0:00:01.209) 0:00:05.558 ********
2026-01-30 02:58:31.259599 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:31.259619 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:31.259637 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:31.259656 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:31.259674 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:31.259692 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:31.259722 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:31.259740 | orchestrator |
2026-01-30 02:58:31.259759 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-01-30 02:58:31.259776 | orchestrator | Friday 30 January 2026 02:58:26 +0000 (0:00:00.297) 0:00:06.773 ********
2026-01-30 02:58:31.259795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 02:58:31.259816 | orchestrator |
2026-01-30 02:58:31.259880 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-01-30 02:58:31.259902 | orchestrator | Friday 30 January 2026 02:58:26 +0000 (0:00:00.297) 0:00:07.070 ********
2026-01-30 02:58:31.259920 | orchestrator | changed: [testbed-manager]
2026-01-30 02:58:31.259936 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:31.259947 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:31.259957 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:31.259968 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:31.259978 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:31.259989 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:31.259999 | orchestrator |
2026-01-30 02:58:31.260010 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-01-30 02:58:31.260021 | orchestrator | Friday 30 January 2026 02:58:28 +0000 (0:00:02.110) 0:00:09.180 ********
2026-01-30 02:58:31.260032 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:31.260044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 02:58:31.260058 | orchestrator |
2026-01-30 02:58:31.260068 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-01-30 02:58:31.260079 | orchestrator | Friday 30 January 2026 02:58:29 +0000 (0:00:00.249) 0:00:09.429 ********
2026-01-30 02:58:31.260090 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:31.260101 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:31.260119 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:31.260130 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:31.260151 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:31.260161 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:31.260172 | orchestrator |
2026-01-30 02:58:31.260183 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-01-30 02:58:31.260193 | orchestrator | Friday 30 January 2026 02:58:30 +0000 (0:00:00.986) 0:00:10.415 ********
2026-01-30 02:58:31.260204 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:31.260215 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:31.260225 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:31.260236 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:31.260247 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:31.260257 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:31.260268 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:31.260278 | orchestrator |
2026-01-30 02:58:31.260289 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-01-30 02:58:31.260300 | orchestrator | Friday 30 January 2026 02:58:30 +0000 (0:00:00.590) 0:00:11.006 ********
2026-01-30 02:58:31.260311 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:31.260321 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:31.260332 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:31.260342 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:58:31.260353 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:58:31.260363 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:58:31.260374 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:31.260385 | orchestrator |
2026-01-30 02:58:31.260395 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-30 02:58:31.260407 | orchestrator | Friday 30 January 2026 02:58:31 +0000 (0:00:00.437) 0:00:11.444 ********
2026-01-30 02:58:31.260418 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:31.260429 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:31.260451 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:43.235377 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:43.235497 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:58:43.235517 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:58:43.235531 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:58:43.235545 | orchestrator |
2026-01-30 02:58:43.235560 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-30 02:58:43.235575 | orchestrator | Friday 30 January 2026 02:58:31 +0000 (0:00:00.220) 0:00:11.664 ********
2026-01-30 02:58:43.235590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 02:58:43.235622 | orchestrator |
2026-01-30 02:58:43.235636 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-30 02:58:43.235650 | orchestrator | Friday 30 January 2026 02:58:31 +0000 (0:00:00.295) 0:00:11.959 ********
2026-01-30 02:58:43.235664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 02:58:43.235678 | orchestrator |
2026-01-30 02:58:43.235691 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-30 02:58:43.235705 | orchestrator | Friday 30 January 2026 02:58:31 +0000 (0:00:00.333) 0:00:12.292 ********
2026-01-30 02:58:43.235718 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.235732 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.235746 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.235760 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.235773 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.235787 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.235800 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.235814 | orchestrator |
2026-01-30 02:58:43.235883 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-30 02:58:43.235900 | orchestrator | Friday 30 January 2026 02:58:33 +0000 (0:00:01.336) 0:00:13.629 ********
2026-01-30 02:58:43.235914 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:43.235927 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:43.235941 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:43.235953 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:43.235966 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:58:43.235979 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:58:43.235993 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:58:43.236006 | orchestrator |
2026-01-30 02:58:43.236017 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-30 02:58:43.236030 | orchestrator | Friday 30 January 2026 02:58:33 +0000 (0:00:00.337) 0:00:13.966 ********
2026-01-30 02:58:43.236043 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.236055 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.236069 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.236082 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.236096 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.236108 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.236121 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.236135 | orchestrator |
2026-01-30 02:58:43.236149 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-30 02:58:43.236162 | orchestrator | Friday 30 January 2026 02:58:34 +0000 (0:00:00.533) 0:00:14.500 ********
2026-01-30 02:58:43.236175 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:43.236190 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:43.236204 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:43.236218 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:43.236233 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:58:43.236245 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:58:43.236258 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:58:43.236272 | orchestrator |
2026-01-30 02:58:43.236285 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-30 02:58:43.236299 | orchestrator | Friday 30 January 2026 02:58:34 +0000 (0:00:00.247) 0:00:14.747 ********
2026-01-30 02:58:43.236324 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:43.236338 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.236351 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:43.236364 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:43.236377 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:43.236391 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:43.236405 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:43.236418 | orchestrator |
2026-01-30 02:58:43.236432 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-30 02:58:43.236446 | orchestrator | Friday 30 January 2026 02:58:35 +0000 (0:00:00.634) 0:00:15.382 ********
2026-01-30 02:58:43.236459 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.236472 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:43.236485 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:43.236497 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:43.236511 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:43.236523 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:43.236537 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:43.236550 | orchestrator |
2026-01-30 02:58:43.236563 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-30 02:58:43.236576 | orchestrator | Friday 30 January 2026 02:58:36 +0000 (0:00:01.113) 0:00:16.495 ********
2026-01-30 02:58:43.236588 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.236602 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.236615 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.236628 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.236640 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.236665 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.236678 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.236691 | orchestrator |
2026-01-30 02:58:43.236704 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-30 02:58:43.236718 | orchestrator | Friday 30 January 2026 02:58:37 +0000 (0:00:01.005) 0:00:17.501 ********
2026-01-30 02:58:43.236755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 02:58:43.236770 | orchestrator |
2026-01-30 02:58:43.236783 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-30 02:58:43.236797 | orchestrator | Friday 30 January 2026 02:58:37 +0000 (0:00:00.286) 0:00:17.787 ********
2026-01-30 02:58:43.236810 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:43.236822 | orchestrator | changed: [testbed-node-5]
2026-01-30 02:58:43.236834 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:43.236846 | orchestrator | changed: [testbed-node-4]
2026-01-30 02:58:43.236887 | orchestrator | changed: [testbed-node-3]
2026-01-30 02:58:43.236900 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:43.236913 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:43.236927 | orchestrator |
2026-01-30 02:58:43.236938 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-30 02:58:43.236949 | orchestrator | Friday 30 January 2026 02:58:38 +0000 (0:00:01.284) 0:00:19.072 ********
2026-01-30 02:58:43.236961 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.236973 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.236986 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.236999 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.237012 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.237026 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.237039 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.237051 | orchestrator |
2026-01-30 02:58:43.237064 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-30 02:58:43.237077 | orchestrator | Friday 30 January 2026 02:58:39 +0000 (0:00:00.277) 0:00:19.350 ********
2026-01-30 02:58:43.237089 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.237102 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.237115 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.237127 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.237138 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.237151 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.237163 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.237174 | orchestrator |
2026-01-30 02:58:43.237187 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-30 02:58:43.237201 | orchestrator | Friday 30 January 2026 02:58:39 +0000 (0:00:00.262) 0:00:19.612 ********
2026-01-30 02:58:43.237214 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.237229 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.237242 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.237254 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.237266 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.237277 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.237289 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.237302 | orchestrator |
2026-01-30 02:58:43.237315 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-30 02:58:43.237329 | orchestrator | Friday 30 January 2026 02:58:39 +0000 (0:00:00.234) 0:00:19.847 ********
2026-01-30 02:58:43.237344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 02:58:43.237359 | orchestrator |
2026-01-30 02:58:43.237371 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-30 02:58:43.237384 | orchestrator | Friday 30 January 2026 02:58:39 +0000 (0:00:00.254) 0:00:20.101 ********
2026-01-30 02:58:43.237411 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.237425 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.237439 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.237451 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.237462 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.237473 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.237485 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.237497 | orchestrator |
2026-01-30 02:58:43.237511 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-30 02:58:43.237524 | orchestrator | Friday 30 January 2026 02:58:40 +0000 (0:00:00.536) 0:00:20.638 ********
2026-01-30 02:58:43.237538 | orchestrator | skipping: [testbed-manager]
2026-01-30 02:58:43.237551 | orchestrator | skipping: [testbed-node-3]
2026-01-30 02:58:43.237564 | orchestrator | skipping: [testbed-node-4]
2026-01-30 02:58:43.237577 | orchestrator | skipping: [testbed-node-5]
2026-01-30 02:58:43.237591 | orchestrator | skipping: [testbed-node-0]
2026-01-30 02:58:43.237603 | orchestrator | skipping: [testbed-node-1]
2026-01-30 02:58:43.237617 | orchestrator | skipping: [testbed-node-2]
2026-01-30 02:58:43.237630 | orchestrator |
2026-01-30 02:58:43.237644 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-30 02:58:43.237657 | orchestrator | Friday 30 January 2026 02:58:40 +0000 (0:00:00.269) 0:00:20.907 ********
2026-01-30 02:58:43.237671 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.237685 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.237699 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.237712 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.237726 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:58:43.237739 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:58:43.237753 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:58:43.237767 | orchestrator |
2026-01-30 02:58:43.237779 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-30 02:58:43.237792 | orchestrator | Friday 30 January 2026 02:58:41 +0000 (0:00:01.005) 0:00:21.912 ********
2026-01-30 02:58:43.237804 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.237818 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.237831 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.237846 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.237916 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:58:43.237930 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:58:43.237956 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:58:43.237970 | orchestrator |
2026-01-30 02:58:43.237982 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-30 02:58:43.237996 | orchestrator | Friday 30 January 2026 02:58:42 +0000 (0:00:00.537) 0:00:22.450 ********
2026-01-30 02:58:43.238008 | orchestrator | ok: [testbed-manager]
2026-01-30 02:58:43.238090 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:58:43.238106 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:58:43.238119 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:58:43.238150 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:59:23.252138 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:59:23.252286 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:59:23.252314 | orchestrator |
2026-01-30 02:59:23.252336 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-30 02:59:23.252358 | orchestrator | Friday 30 January 2026 02:58:43 +0000 (0:00:01.100) 0:00:23.551 ********
2026-01-30 02:59:23.252378 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:59:23.252398 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:59:23.252418 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:59:23.252438 | orchestrator | changed: [testbed-manager]
2026-01-30 02:59:23.252458 | orchestrator | changed: [testbed-node-2]
2026-01-30 02:59:23.252476 | orchestrator | changed: [testbed-node-1]
2026-01-30 02:59:23.252493 | orchestrator | changed: [testbed-node-0]
2026-01-30 02:59:23.252512 | orchestrator |
2026-01-30 02:59:23.252531 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-30 02:59:23.252579 | orchestrator | Friday 30 January 2026 02:58:59 +0000 (0:00:16.776) 0:00:40.327 ********
2026-01-30 02:59:23.252599 | orchestrator | ok: [testbed-manager]
2026-01-30 02:59:23.252618 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:59:23.252637 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:59:23.252655 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:59:23.252673 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:59:23.252691 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:59:23.252711 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:59:23.252723 | orchestrator |
2026-01-30 02:59:23.252734 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-30 02:59:23.252745 | orchestrator | Friday 30 January 2026 02:59:00 +0000 (0:00:00.214) 0:00:40.542 ********
2026-01-30 02:59:23.252756 | orchestrator | ok: [testbed-manager]
2026-01-30 02:59:23.252767 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:59:23.252778 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:59:23.252788 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:59:23.252799 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:59:23.252810 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:59:23.252821 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:59:23.252832 | orchestrator |
2026-01-30 02:59:23.252842 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-30 02:59:23.252853 | orchestrator | Friday 30 January 2026 02:59:00 +0000 (0:00:00.258) 0:00:40.800 ********
2026-01-30 02:59:23.252864 | orchestrator | ok: [testbed-manager]
2026-01-30 02:59:23.252875 | orchestrator | ok: [testbed-node-3]
2026-01-30 02:59:23.252885 | orchestrator | ok: [testbed-node-4]
2026-01-30 02:59:23.252933 | orchestrator | ok: [testbed-node-5]
2026-01-30 02:59:23.252947 | orchestrator | ok: [testbed-node-0]
2026-01-30 02:59:23.252958 | orchestrator | ok: [testbed-node-1]
2026-01-30 02:59:23.252969 | orchestrator | ok: [testbed-node-2]
2026-01-30 02:59:23.252980 | orchestrator |
2026-01-30 02:59:23.252990 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-30 02:59:23.253002 | orchestrator | Friday 30 January 2026 02:59:00 +0000 (0:00:00.264) 0:00:41.065 ********
2026-01-
02:59:23.253014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 02:59:23.253028 | orchestrator | 2026-01-30 02:59:23.253039 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-01-30 02:59:23.253050 | orchestrator | Friday 30 January 2026 02:59:01 +0000 (0:00:00.318) 0:00:41.384 ******** 2026-01-30 02:59:23.253060 | orchestrator | ok: [testbed-manager] 2026-01-30 02:59:23.253071 | orchestrator | ok: [testbed-node-4] 2026-01-30 02:59:23.253082 | orchestrator | ok: [testbed-node-5] 2026-01-30 02:59:23.253093 | orchestrator | ok: [testbed-node-2] 2026-01-30 02:59:23.253103 | orchestrator | ok: [testbed-node-1] 2026-01-30 02:59:23.253114 | orchestrator | ok: [testbed-node-0] 2026-01-30 02:59:23.253125 | orchestrator | ok: [testbed-node-3] 2026-01-30 02:59:23.253136 | orchestrator | 2026-01-30 02:59:23.253147 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-01-30 02:59:23.253158 | orchestrator | Friday 30 January 2026 02:59:02 +0000 (0:00:01.757) 0:00:43.141 ******** 2026-01-30 02:59:23.253169 | orchestrator | changed: [testbed-manager] 2026-01-30 02:59:23.253179 | orchestrator | changed: [testbed-node-3] 2026-01-30 02:59:23.253190 | orchestrator | changed: [testbed-node-5] 2026-01-30 02:59:23.253201 | orchestrator | changed: [testbed-node-4] 2026-01-30 02:59:23.253223 | orchestrator | changed: [testbed-node-0] 2026-01-30 02:59:23.253234 | orchestrator | changed: [testbed-node-2] 2026-01-30 02:59:23.253245 | orchestrator | changed: [testbed-node-1] 2026-01-30 02:59:23.253255 | orchestrator | 2026-01-30 02:59:23.253266 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-01-30 02:59:23.253277 | 
orchestrator | Friday 30 January 2026 02:59:03 +0000 (0:00:01.077) 0:00:44.219 ******** 2026-01-30 02:59:23.253298 | orchestrator | ok: [testbed-manager] 2026-01-30 02:59:23.253309 | orchestrator | ok: [testbed-node-3] 2026-01-30 02:59:23.253320 | orchestrator | ok: [testbed-node-5] 2026-01-30 02:59:23.253330 | orchestrator | ok: [testbed-node-4] 2026-01-30 02:59:23.253341 | orchestrator | ok: [testbed-node-1] 2026-01-30 02:59:23.253352 | orchestrator | ok: [testbed-node-0] 2026-01-30 02:59:23.253363 | orchestrator | ok: [testbed-node-2] 2026-01-30 02:59:23.253373 | orchestrator | 2026-01-30 02:59:23.253384 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-01-30 02:59:23.253395 | orchestrator | Friday 30 January 2026 02:59:04 +0000 (0:00:00.798) 0:00:45.017 ******** 2026-01-30 02:59:23.253407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 02:59:23.253419 | orchestrator | 2026-01-30 02:59:23.253430 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-01-30 02:59:23.253443 | orchestrator | Friday 30 January 2026 02:59:05 +0000 (0:00:00.382) 0:00:45.400 ******** 2026-01-30 02:59:23.253453 | orchestrator | changed: [testbed-manager] 2026-01-30 02:59:23.253464 | orchestrator | changed: [testbed-node-5] 2026-01-30 02:59:23.253475 | orchestrator | changed: [testbed-node-3] 2026-01-30 02:59:23.253485 | orchestrator | changed: [testbed-node-4] 2026-01-30 02:59:23.253496 | orchestrator | changed: [testbed-node-0] 2026-01-30 02:59:23.253507 | orchestrator | changed: [testbed-node-2] 2026-01-30 02:59:23.253518 | orchestrator | changed: [testbed-node-1] 2026-01-30 02:59:23.253529 | orchestrator | 2026-01-30 02:59:23.253560 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-01-30 02:59:23.253573 | orchestrator | Friday 30 January 2026 02:59:06 +0000 (0:00:01.019) 0:00:46.419 ******** 2026-01-30 02:59:23.253583 | orchestrator | skipping: [testbed-manager] 2026-01-30 02:59:23.253594 | orchestrator | skipping: [testbed-node-3] 2026-01-30 02:59:23.253605 | orchestrator | skipping: [testbed-node-4] 2026-01-30 02:59:23.253616 | orchestrator | skipping: [testbed-node-5] 2026-01-30 02:59:23.253626 | orchestrator | skipping: [testbed-node-0] 2026-01-30 02:59:23.253637 | orchestrator | skipping: [testbed-node-1] 2026-01-30 02:59:23.253648 | orchestrator | skipping: [testbed-node-2] 2026-01-30 02:59:23.253658 | orchestrator | 2026-01-30 02:59:23.253669 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-01-30 02:59:23.253680 | orchestrator | Friday 30 January 2026 02:59:06 +0000 (0:00:00.267) 0:00:46.686 ******** 2026-01-30 02:59:23.253692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 02:59:23.253703 | orchestrator | 2026-01-30 02:59:23.253714 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-01-30 02:59:23.253725 | orchestrator | Friday 30 January 2026 02:59:06 +0000 (0:00:00.274) 0:00:46.961 ******** 2026-01-30 02:59:23.253736 | orchestrator | ok: [testbed-manager] 2026-01-30 02:59:23.253747 | orchestrator | ok: [testbed-node-3] 2026-01-30 02:59:23.253758 | orchestrator | ok: [testbed-node-5] 2026-01-30 02:59:23.253768 | orchestrator | ok: [testbed-node-4] 2026-01-30 02:59:23.253779 | orchestrator | ok: [testbed-node-2] 2026-01-30 02:59:23.253790 | orchestrator | ok: [testbed-node-1] 2026-01-30 02:59:23.253800 | orchestrator | ok: [testbed-node-0] 2026-01-30 02:59:23.253811 | 
orchestrator | 2026-01-30 02:59:23.253822 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-01-30 02:59:23.253833 | orchestrator | Friday 30 January 2026 02:59:08 +0000 (0:00:01.675) 0:00:48.637 ******** 2026-01-30 02:59:23.253844 | orchestrator | changed: [testbed-manager] 2026-01-30 02:59:23.253855 | orchestrator | changed: [testbed-node-5] 2026-01-30 02:59:23.253865 | orchestrator | changed: [testbed-node-3] 2026-01-30 02:59:23.253876 | orchestrator | changed: [testbed-node-4] 2026-01-30 02:59:23.253887 | orchestrator | changed: [testbed-node-1] 2026-01-30 02:59:23.253925 | orchestrator | changed: [testbed-node-0] 2026-01-30 02:59:23.253937 | orchestrator | changed: [testbed-node-2] 2026-01-30 02:59:23.253948 | orchestrator | 2026-01-30 02:59:23.253959 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-01-30 02:59:23.253970 | orchestrator | Friday 30 January 2026 02:59:09 +0000 (0:00:01.144) 0:00:49.781 ******** 2026-01-30 02:59:23.253981 | orchestrator | changed: [testbed-node-5] 2026-01-30 02:59:23.253992 | orchestrator | changed: [testbed-node-2] 2026-01-30 02:59:23.254002 | orchestrator | changed: [testbed-node-3] 2026-01-30 02:59:23.254013 | orchestrator | changed: [testbed-node-1] 2026-01-30 02:59:23.254100 | orchestrator | changed: [testbed-node-4] 2026-01-30 02:59:23.254112 | orchestrator | changed: [testbed-node-0] 2026-01-30 02:59:23.254123 | orchestrator | changed: [testbed-manager] 2026-01-30 02:59:23.254133 | orchestrator | 2026-01-30 02:59:23.254144 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-01-30 02:59:23.254155 | orchestrator | Friday 30 January 2026 02:59:20 +0000 (0:00:11.114) 0:01:00.896 ******** 2026-01-30 02:59:23.254166 | orchestrator | ok: [testbed-manager] 2026-01-30 02:59:23.254177 | orchestrator | ok: [testbed-node-0] 2026-01-30 02:59:23.254188 | orchestrator | ok: 
[testbed-node-3] 2026-01-30 02:59:23.254199 | orchestrator | ok: [testbed-node-5] 2026-01-30 02:59:23.254209 | orchestrator | ok: [testbed-node-4] 2026-01-30 02:59:23.254220 | orchestrator | ok: [testbed-node-2] 2026-01-30 02:59:23.254230 | orchestrator | ok: [testbed-node-1] 2026-01-30 02:59:23.254241 | orchestrator | 2026-01-30 02:59:23.254252 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-01-30 02:59:23.254263 | orchestrator | Friday 30 January 2026 02:59:21 +0000 (0:00:01.094) 0:01:01.991 ******** 2026-01-30 02:59:23.254274 | orchestrator | ok: [testbed-manager] 2026-01-30 02:59:23.254284 | orchestrator | ok: [testbed-node-3] 2026-01-30 02:59:23.254295 | orchestrator | ok: [testbed-node-4] 2026-01-30 02:59:23.254306 | orchestrator | ok: [testbed-node-5] 2026-01-30 02:59:23.254316 | orchestrator | ok: [testbed-node-0] 2026-01-30 02:59:23.254327 | orchestrator | ok: [testbed-node-1] 2026-01-30 02:59:23.254344 | orchestrator | ok: [testbed-node-2] 2026-01-30 02:59:23.254355 | orchestrator | 2026-01-30 02:59:23.254366 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-01-30 02:59:23.254377 | orchestrator | Friday 30 January 2026 02:59:22 +0000 (0:00:00.876) 0:01:02.867 ******** 2026-01-30 02:59:23.254388 | orchestrator | ok: [testbed-manager] 2026-01-30 02:59:23.254398 | orchestrator | ok: [testbed-node-3] 2026-01-30 02:59:23.254409 | orchestrator | ok: [testbed-node-4] 2026-01-30 02:59:23.254419 | orchestrator | ok: [testbed-node-5] 2026-01-30 02:59:23.254430 | orchestrator | ok: [testbed-node-0] 2026-01-30 02:59:23.254441 | orchestrator | ok: [testbed-node-1] 2026-01-30 02:59:23.254451 | orchestrator | ok: [testbed-node-2] 2026-01-30 02:59:23.254462 | orchestrator | 2026-01-30 02:59:23.254473 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-01-30 02:59:23.254484 | orchestrator | Friday 30 
January 2026 02:59:22 +0000 (0:00:00.222) 0:01:03.089 ******** 2026-01-30 02:59:23.254495 | orchestrator | ok: [testbed-manager] 2026-01-30 02:59:23.254506 | orchestrator | ok: [testbed-node-3] 2026-01-30 02:59:23.254516 | orchestrator | ok: [testbed-node-4] 2026-01-30 02:59:23.254527 | orchestrator | ok: [testbed-node-5] 2026-01-30 02:59:23.254538 | orchestrator | ok: [testbed-node-0] 2026-01-30 02:59:23.254548 | orchestrator | ok: [testbed-node-1] 2026-01-30 02:59:23.254559 | orchestrator | ok: [testbed-node-2] 2026-01-30 02:59:23.254569 | orchestrator | 2026-01-30 02:59:23.254580 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-01-30 02:59:23.254591 | orchestrator | Friday 30 January 2026 02:59:22 +0000 (0:00:00.209) 0:01:03.299 ******** 2026-01-30 02:59:23.254603 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 02:59:23.254622 | orchestrator | 2026-01-30 02:59:23.254642 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-01-30 03:01:33.720726 | orchestrator | Friday 30 January 2026 02:59:23 +0000 (0:00:00.269) 0:01:03.568 ******** 2026-01-30 03:01:33.720872 | orchestrator | ok: [testbed-manager] 2026-01-30 03:01:33.720901 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:01:33.720922 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:01:33.720941 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:01:33.720960 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:01:33.720978 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:01:33.720997 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:01:33.721016 | orchestrator | 2026-01-30 03:01:33.721037 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-01-30 03:01:33.721135 | orchestrator | Friday 30 January 2026 02:59:24 +0000 (0:00:01.603) 0:01:05.172 ******** 2026-01-30 03:01:33.721155 | orchestrator | changed: [testbed-manager] 2026-01-30 03:01:33.721175 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:01:33.721194 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:01:33.721212 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:01:33.721230 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:01:33.721248 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:01:33.721267 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:01:33.721286 | orchestrator | 2026-01-30 03:01:33.721305 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-01-30 03:01:33.721324 | orchestrator | Friday 30 January 2026 02:59:25 +0000 (0:00:00.579) 0:01:05.751 ******** 2026-01-30 03:01:33.721343 | orchestrator | ok: [testbed-manager] 2026-01-30 03:01:33.721361 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:01:33.721380 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:01:33.721399 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:01:33.721417 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:01:33.721434 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:01:33.721453 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:01:33.721473 | orchestrator | 2026-01-30 03:01:33.721492 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-01-30 03:01:33.721511 | orchestrator | Friday 30 January 2026 02:59:25 +0000 (0:00:00.206) 0:01:05.958 ******** 2026-01-30 03:01:33.721529 | orchestrator | ok: [testbed-manager] 2026-01-30 03:01:33.721547 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:01:33.721565 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:01:33.721584 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:01:33.721602 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:01:33.721621 | 
orchestrator | ok: [testbed-node-1] 2026-01-30 03:01:33.721633 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:01:33.721644 | orchestrator | 2026-01-30 03:01:33.721655 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-01-30 03:01:33.721666 | orchestrator | Friday 30 January 2026 02:59:26 +0000 (0:00:01.196) 0:01:07.155 ******** 2026-01-30 03:01:33.721677 | orchestrator | changed: [testbed-manager] 2026-01-30 03:01:33.721689 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:01:33.721708 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:01:33.721727 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:01:33.721743 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:01:33.721754 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:01:33.721769 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:01:33.721780 | orchestrator | 2026-01-30 03:01:33.721795 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-01-30 03:01:33.721813 | orchestrator | Friday 30 January 2026 02:59:28 +0000 (0:00:01.673) 0:01:08.828 ******** 2026-01-30 03:01:33.721831 | orchestrator | ok: [testbed-manager] 2026-01-30 03:01:33.721843 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:01:33.721853 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:01:33.721864 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:01:33.721875 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:01:33.721886 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:01:33.721921 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:01:33.721933 | orchestrator | 2026-01-30 03:01:33.721944 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-01-30 03:01:33.721954 | orchestrator | Friday 30 January 2026 02:59:30 +0000 (0:00:02.380) 0:01:11.209 ******** 2026-01-30 03:01:33.721965 | orchestrator | ok: [testbed-manager] 2026-01-30 03:01:33.721977 
| orchestrator | ok: [testbed-node-3] 2026-01-30 03:01:33.721995 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:01:33.722012 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:01:33.722134 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:01:33.722145 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:01:33.722156 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:01:33.722167 | orchestrator | 2026-01-30 03:01:33.722178 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-01-30 03:01:33.722189 | orchestrator | Friday 30 January 2026 03:00:03 +0000 (0:00:32.190) 0:01:43.399 ******** 2026-01-30 03:01:33.722200 | orchestrator | changed: [testbed-manager] 2026-01-30 03:01:33.722211 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:01:33.722221 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:01:33.722232 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:01:33.722243 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:01:33.722253 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:01:33.722264 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:01:33.722275 | orchestrator | 2026-01-30 03:01:33.722286 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-01-30 03:01:33.722297 | orchestrator | Friday 30 January 2026 03:01:20 +0000 (0:01:17.921) 0:03:01.321 ******** 2026-01-30 03:01:33.722308 | orchestrator | ok: [testbed-manager] 2026-01-30 03:01:33.722318 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:01:33.722329 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:01:33.722340 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:01:33.722350 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:01:33.722361 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:01:33.722372 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:01:33.722382 | orchestrator | 2026-01-30 03:01:33.722393 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-01-30 03:01:33.722404 | orchestrator | Friday 30 January 2026 03:01:22 +0000 (0:00:01.712) 0:03:03.033 ******** 2026-01-30 03:01:33.722415 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:01:33.722425 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:01:33.722436 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:01:33.722447 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:01:33.722457 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:01:33.722468 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:01:33.722478 | orchestrator | changed: [testbed-manager] 2026-01-30 03:01:33.722489 | orchestrator | 2026-01-30 03:01:33.722500 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-01-30 03:01:33.722510 | orchestrator | Friday 30 January 2026 03:01:32 +0000 (0:00:09.966) 0:03:13.000 ******** 2026-01-30 03:01:33.722559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-01-30 03:01:33.722595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-01-30 03:01:33.722622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-01-30 03:01:33.722635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-30 03:01:33.722647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-01-30 03:01:33.722658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-01-30 03:01:33.722669 | orchestrator | 2026-01-30 03:01:33.722681 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-01-30 03:01:33.722692 | orchestrator | Friday 30 January 2026 03:01:32 +0000 (0:00:00.304) 0:03:13.305 ******** 2026-01-30 03:01:33.722703 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-30 03:01:33.722714 | orchestrator | 
skipping: [testbed-manager] 2026-01-30 03:01:33.722730 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-30 03:01:33.722742 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-30 03:01:33.722753 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:01:33.722764 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-01-30 03:01:33.722775 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:01:33.722786 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:01:33.722797 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 03:01:33.722808 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 03:01:33.722819 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 03:01:33.722830 | orchestrator | 2026-01-30 03:01:33.722841 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-01-30 03:01:33.722852 | orchestrator | Friday 30 January 2026 03:01:33 +0000 (0:00:00.643) 0:03:13.949 ******** 2026-01-30 03:01:33.722863 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-30 03:01:33.722875 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-30 03:01:33.722886 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-30 03:01:33.722897 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-30 03:01:33.722908 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-30 03:01:33.722926 | 
orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-30 03:01:39.439859 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-30 03:01:39.439982 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-30 03:01:39.439995 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-30 03:01:39.440004 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-30 03:01:39.440013 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-30 03:01:39.440022 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-30 03:01:39.440030 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-30 03:01:39.440039 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-30 03:01:39.440098 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-30 03:01:39.440115 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-30 03:01:39.440135 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-30 03:01:39.440155 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-30 03:01:39.440172 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-30 03:01:39.440187 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-30 03:01:39.440201 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-30 03:01:39.440216 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-30 03:01:39.440230 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:01:39.440246 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-30 03:01:39.440260 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-01-30 03:01:39.440273 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-30 03:01:39.440288 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-01-30 03:01:39.440303 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-01-30 03:01:39.440318 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-01-30 03:01:39.440333 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-01-30 03:01:39.440348 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-01-30 03:01:39.440363 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-01-30 03:01:39.440377 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-01-30 03:01:39.440409 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-01-30 03:01:39.440424 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-01-30 03:01:39.440435 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:01:39.440446 | 
orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-30 03:01:39.440456 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-30 03:01:39.440467 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-30 03:01:39.440477 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-30 03:01:39.440497 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-30 03:01:39.440507 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-30 03:01:39.440517 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:01:39.440527 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:01:39.440537 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-30 03:01:39.440547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-30 03:01:39.440557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-30 03:01:39.440567 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-30 03:01:39.440577 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-30 03:01:39.440604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-30 03:01:39.440615 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-30 03:01:39.440625 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-30 03:01:39.440634 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-30 03:01:39.440644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-30 03:01:39.440654 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-30 03:01:39.440664 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-30 03:01:39.440674 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-30 03:01:39.440683 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-30 03:01:39.440692 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-30 03:01:39.440700 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-30 03:01:39.440709 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-30 03:01:39.440717 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-30 03:01:39.440726 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-30 03:01:39.440735 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-30 03:01:39.440743 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-30 03:01:39.440752 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-30 03:01:39.440760 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-30 03:01:39.440769 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-30 03:01:39.440777 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-30 03:01:39.440786 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-30 03:01:39.440794 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-30 03:01:39.440804 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-30 03:01:39.440812 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-30 03:01:39.440830 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-30 03:01:39.440839 | orchestrator |
2026-01-30 03:01:39.440847 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-30 03:01:39.440856 | orchestrator | Friday 30 January 2026 03:01:38 +0000 (0:00:04.760) 0:03:18.709 ********
2026-01-30 03:01:39.440865 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-30 03:01:39.440874 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-30 03:01:39.440887 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-30 03:01:39.440895 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-30 03:01:39.440904 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-30 03:01:39.440912 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-30 03:01:39.440921 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-30 03:01:39.440930 | orchestrator |
2026-01-30 03:01:39.440938 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-01-30 03:01:39.440947 | orchestrator | Friday 30 January 2026 03:01:38 +0000 (0:00:00.559) 0:03:19.269 ********
2026-01-30 03:01:39.440955 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:39.440964 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:01:39.440973 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:39.440982 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:01:39.440990 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:39.440999 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:01:39.441007 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:39.441016 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:01:39.441025 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:39.441034 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:39.441073 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.916794 | orchestrator |
2026-01-30 03:01:51.916911 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-01-30 03:01:51.916928 | orchestrator | Friday 30 January 2026 03:01:39 +0000 (0:00:00.485) 0:03:19.754 ********
2026-01-30 03:01:51.916939 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.916952 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:01:51.916965 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.916976 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.916987 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:01:51.916998 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:01:51.917009 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.917020 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:01:51.917031 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.917041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.917052 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-01-30 03:01:51.917124 | orchestrator |
2026-01-30 03:01:51.917138 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-01-30 03:01:51.917149 | orchestrator | Friday 30 January 2026 03:01:39 +0000 (0:00:00.526) 0:03:20.281 ********
2026-01-30 03:01:51.917159 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-30 03:01:51.917171 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:01:51.917182 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-30 03:01:51.917193 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-30 03:01:51.917204 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:01:51.917214 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:01:51.917225 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-30 03:01:51.917240 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:01:51.917260 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-30 03:01:51.917288 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-30 03:01:51.917309 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-01-30 03:01:51.917328 | orchestrator |
2026-01-30 03:01:51.917347 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-01-30 03:01:51.917366 | orchestrator | Friday 30 January 2026 03:01:40 +0000 (0:00:00.227) 0:03:20.832 ********
2026-01-30 03:01:51.917384 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:01:51.917403 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:01:51.917421 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:01:51.917439 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:01:51.917457 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:01:51.917478 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:01:51.917496 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:01:51.917515 | orchestrator |
2026-01-30 03:01:51.917534 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-01-30 03:01:51.917555 | orchestrator | Friday 30 January 2026 03:01:40 +0000 (0:00:00.227) 0:03:21.060 ********
2026-01-30 03:01:51.917612 | orchestrator | ok: [testbed-manager]
2026-01-30 03:01:51.917634 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:01:51.917653 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:01:51.917672 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:01:51.917691 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:01:51.917710 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:01:51.917728 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:01:51.917748 | orchestrator |
2026-01-30 03:01:51.917760 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-01-30 03:01:51.917771 | orchestrator | Friday 30 January 2026 03:01:46 +0000 (0:00:05.442) 0:03:26.502 ********
2026-01-30 03:01:51.917782 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-01-30 03:01:51.917793 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:01:51.917804 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-01-30 03:01:51.917815 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-01-30 03:01:51.917826 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:01:51.917836 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-01-30 03:01:51.917848 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:01:51.917859 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-01-30 03:01:51.917870 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:01:51.917880 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:01:51.917929 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-01-30 03:01:51.917953 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:01:51.917977 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-01-30 03:01:51.917988 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:01:51.917999 | orchestrator |
2026-01-30 03:01:51.918011 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-01-30 03:01:51.918122 | orchestrator | Friday 30 January 2026 03:01:46 +0000 (0:00:00.304) 0:03:26.806 ********
2026-01-30 03:01:51.918134 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-01-30 03:01:51.918146 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-01-30 03:01:51.918157 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-01-30 03:01:51.918190 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-01-30 03:01:51.918202 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-01-30 03:01:51.918213 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-01-30 03:01:51.918224 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-01-30 03:01:51.918235 | orchestrator |
2026-01-30 03:01:51.918246 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-01-30 03:01:51.918257 | orchestrator | Friday 30 January 2026 03:01:47 +0000 (0:00:01.091) 0:03:27.897 ********
2026-01-30 03:01:51.918271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:01:51.918284 | orchestrator |
2026-01-30 03:01:51.918295 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-01-30 03:01:51.918306 | orchestrator | Friday 30 January 2026 03:01:47 +0000 (0:00:00.402) 0:03:28.300 ********
2026-01-30 03:01:51.918317 | orchestrator | ok: [testbed-manager]
2026-01-30 03:01:51.918328 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:01:51.918339 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:01:51.918350 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:01:51.918361 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:01:51.918372 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:01:51.918382 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:01:51.918393 | orchestrator |
2026-01-30 03:01:51.918404 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-01-30 03:01:51.918415 | orchestrator | Friday 30 January 2026 03:01:49 +0000 (0:00:01.196) 0:03:29.496 ********
2026-01-30 03:01:51.918426 | orchestrator | ok: [testbed-manager]
2026-01-30 03:01:51.918437 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:01:51.918448 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:01:51.918459 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:01:51.918469 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:01:51.918480 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:01:51.918491 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:01:51.918502 | orchestrator |
2026-01-30 03:01:51.918513 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-01-30 03:01:51.918524 | orchestrator | Friday 30 January 2026 03:01:49 +0000 (0:00:00.628) 0:03:30.125 ********
2026-01-30 03:01:51.918535 | orchestrator | changed: [testbed-manager]
2026-01-30 03:01:51.918550 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:01:51.918570 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:01:51.918593 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:01:51.918621 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:01:51.918640 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:01:51.918659 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:01:51.918677 | orchestrator |
2026-01-30 03:01:51.918697 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-01-30 03:01:51.918717 | orchestrator | Friday 30 January 2026 03:01:50 +0000 (0:00:00.605) 0:03:30.730 ********
2026-01-30 03:01:51.918737 | orchestrator | ok: [testbed-manager]
2026-01-30 03:01:51.918757 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:01:51.918776 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:01:51.918787 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:01:51.918798 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:01:51.918808 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:01:51.918831 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:01:51.918842 | orchestrator |
2026-01-30 03:01:51.918853 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-01-30 03:01:51.918864 | orchestrator | Friday 30 January 2026 03:01:50 +0000 (0:00:00.564) 0:03:31.295 ********
2026-01-30 03:01:51.918887 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769740743.6770837, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:51.918903 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769740762.025606, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:51.918915 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769740779.6567423, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:51.918952 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769740768.2149854, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.738787 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769740784.8918812, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.738897 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769740771.106288, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.738913 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769740763.4222398, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.738967 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.738980 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.738992 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.739004 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.739044 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.739057 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.739119 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 03:01:56.739140 | orchestrator |
2026-01-30 03:01:56.739153 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-01-30 03:01:56.739166 | orchestrator | Friday 30 January 2026 03:01:51 +0000 (0:00:00.937) 0:03:32.232 ********
2026-01-30 03:01:56.739177 | orchestrator | changed: [testbed-manager]
2026-01-30 03:01:56.739189 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:01:56.739201 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:01:56.739212 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:01:56.739222 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:01:56.739233 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:01:56.739243 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:01:56.739254 | orchestrator |
2026-01-30 03:01:56.739265 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-01-30 03:01:56.739276 | orchestrator | Friday 30 January 2026 03:01:53 +0000 (0:00:01.098) 0:03:33.331 ********
2026-01-30 03:01:56.739288 | orchestrator | changed: [testbed-manager]
2026-01-30 03:01:56.739301 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:01:56.739319 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:01:56.739332 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:01:56.739346 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:01:56.739359 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:01:56.739371 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:01:56.739381 | orchestrator |
2026-01-30 03:01:56.739392 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-01-30 03:01:56.739403 | orchestrator | Friday 30 January 2026 03:01:54 +0000 (0:00:01.177) 0:03:34.509 ********
2026-01-30 03:01:56.739414 | orchestrator | changed: [testbed-manager]
2026-01-30 03:01:56.739425 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:01:56.739435 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:01:56.739446 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:01:56.739456 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:01:56.739467 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:01:56.739477 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:01:56.739488 | orchestrator |
2026-01-30 03:01:56.739499 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-01-30 03:01:56.739509 | orchestrator | Friday 30 January 2026 03:01:55 +0000 (0:00:01.108) 0:03:35.617 ********
2026-01-30 03:01:56.739520 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:01:56.739531 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:01:56.739541 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:01:56.739552 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:01:56.739562 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:01:56.739573 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:01:56.739584 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:01:56.739594 | orchestrator |
2026-01-30 03:01:56.739605 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-01-30 03:01:56.739616 | orchestrator | Friday 30 January 2026 03:01:55 +0000 (0:00:00.297) 0:03:35.914 ********
2026-01-30 03:01:56.739627 | orchestrator | ok: [testbed-manager]
2026-01-30 03:01:56.739639 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:01:56.739649 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:01:56.739660 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:01:56.739670 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:01:56.739681 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:01:56.739691 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:01:56.739702 | orchestrator |
2026-01-30 03:01:56.739713 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-01-30 03:01:56.739723 | orchestrator | Friday 30 January 2026 03:01:56 +0000 (0:00:00.735) 0:03:36.650 ********
2026-01-30 03:01:56.739742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:01:56.739755 | orchestrator |
2026-01-30 03:01:56.739766 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-01-30 03:01:56.739785 | orchestrator | Friday 30 January 2026 03:01:56 +0000 (0:00:00.406) 0:03:37.057 ********
2026-01-30 03:03:12.571358 | orchestrator | ok: [testbed-manager]
2026-01-30 03:03:12.571497 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:03:12.571515 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:03:12.571527 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:03:12.571538 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:03:12.571549 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:03:12.571561 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:03:12.572362 | orchestrator |
2026-01-30 03:03:12.572405 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-01-30 03:03:12.572424 | orchestrator | Friday 30 January 2026 03:02:04 +0000 (0:00:08.164) 0:03:45.221 ********
2026-01-30 03:03:12.572442 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:03:12.572459 | orchestrator | ok: [testbed-manager]
2026-01-30 03:03:12.572477 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:03:12.572494 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:03:12.572510 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:03:12.572527 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:03:12.572545 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:03:12.572563 | orchestrator |
2026-01-30 03:03:12.572583 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-01-30 03:03:12.572603 | orchestrator | Friday 30 January 2026 03:02:06 +0000 (0:00:01.244) 0:03:46.466 ********
2026-01-30 03:03:12.572623 | orchestrator | ok: [testbed-manager]
2026-01-30 03:03:12.572636 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:03:12.572648 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:03:12.572658 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:03:12.572670 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:03:12.572680 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:03:12.572691 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:03:12.572702 | orchestrator |
2026-01-30 03:03:12.572713 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-01-30 03:03:12.572724 | orchestrator | Friday 30 January 2026 03:02:07 +0000 (0:00:01.071) 0:03:47.537 ********
2026-01-30 03:03:12.572735 | orchestrator | ok: [testbed-manager]
2026-01-30 03:03:12.572746 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:03:12.572758 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:03:12.572769 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:03:12.572779 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:03:12.572790 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:03:12.572801 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:03:12.572812 | orchestrator |
2026-01-30 03:03:12.572823 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-30 03:03:12.572835 | orchestrator | Friday 30 January 2026 03:02:07 +0000 (0:00:00.273) 0:03:47.811 ********
2026-01-30 03:03:12.572846 | orchestrator | ok: [testbed-manager]
2026-01-30 03:03:12.572857 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:03:12.572868 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:03:12.572878 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:03:12.572889 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:03:12.572900 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:03:12.572911 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:03:12.572922 | orchestrator |
2026-01-30 03:03:12.572933 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-30 03:03:12.572944 | orchestrator | Friday 30 January 2026 03:02:07 +0000 (0:00:00.286) 0:03:48.098 ********
2026-01-30 03:03:12.572955 | orchestrator | ok: [testbed-manager]
2026-01-30 03:03:12.572993 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:03:12.573004 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:03:12.573016 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:03:12.573026 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:03:12.573037 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:03:12.573048 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:03:12.573059 | orchestrator |
2026-01-30 03:03:12.573070 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-30 03:03:12.573081 | orchestrator | Friday 30 January 2026 03:02:08 +0000 (0:00:00.306) 0:03:48.404 ********
2026-01-30 03:03:12.573092 | orchestrator | ok: [testbed-manager]
2026-01-30 03:03:12.573102 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:03:12.573113 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:03:12.573124 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:03:12.573135 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:03:12.573180 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:03:12.573191 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:03:12.573202 | orchestrator |
2026-01-30 03:03:12.573213 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-30 03:03:12.573224 | orchestrator | Friday 30 January 2026 03:02:13 +0000 (0:00:05.579) 0:03:53.984 ********
2026-01-30 03:03:12.573237 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:03:12.573251 | orchestrator |
2026-01-30 03:03:12.573262 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-30 03:03:12.573273 | orchestrator | Friday 30 January 2026 03:02:14 +0000 (0:00:00.361) 0:03:54.346 ********
2026-01-30 03:03:12.573284 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-30 03:03:12.573294 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-30 03:03:12.573306 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-30 03:03:12.573317 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-30 03:03:12.573328 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:03:12.573338 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:03:12.573367 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-30 03:03:12.573379 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-30 03:03:12.573390 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-30 03:03:12.573400 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:03:12.573411 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-30 03:03:12.573432 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:03:12.573450 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-30 03:03:12.573471 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-30 03:03:12.573490 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-30 03:03:12.573509 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:03:12.573560 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-30 03:03:12.573583 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:03:12.573602 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-30 03:03:12.573614 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-30 03:03:12.573625 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:03:12.573636 | orchestrator |
2026-01-30 03:03:12.573647 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-30 03:03:12.573658 | orchestrator | Friday 30 January 2026 03:02:14 +0000 (0:00:00.327) 0:03:54.673 ********
2026-01-30 03:03:12.573669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:03:12.573692 | orchestrator |
2026-01-30 03:03:12.573703 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-30 03:03:12.573714 | orchestrator | Friday 30 January 2026 03:02:14 +0000 (0:00:00.356) 0:03:55.030 ********
2026-01-30 03:03:12.573725 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-30 03:03:12.573736 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-30 03:03:12.573747 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:03:12.573758 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-30 03:03:12.573770 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:03:12.573780 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-30 03:03:12.573791 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:03:12.573802 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:03:12.573813 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-30 03:03:12.573829 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:03:12.573853 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-30 03:03:12.573880 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:03:12.573895 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-30 03:03:12.573912 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:03:12.573928 | orchestrator |
2026-01-30 03:03:12.573944 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-30 03:03:12.573982 | orchestrator | Friday 30 January 2026 03:02:14 +0000 (0:00:00.300) 0:03:55.330 ********
2026-01-30 03:03:12.574084 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:03:12.574104 | orchestrator |
2026-01-30 03:03:12.574125 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-30 03:03:12.574196 | orchestrator | Friday 30 January 2026 03:02:15 +0000 (0:00:00.363) 0:03:55.694 ********
2026-01-30 03:03:12.574210 | orchestrator | changed: [testbed-manager]
2026-01-30
03:03:12.574221 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:03:12.574232 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:03:12.574243 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:03:12.574253 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:03:12.574264 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:03:12.574275 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:03:12.574285 | orchestrator | 2026-01-30 03:03:12.574296 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-01-30 03:03:12.574307 | orchestrator | Friday 30 January 2026 03:02:49 +0000 (0:00:34.290) 0:04:29.984 ******** 2026-01-30 03:03:12.574318 | orchestrator | changed: [testbed-manager] 2026-01-30 03:03:12.574329 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:03:12.574339 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:03:12.574350 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:03:12.574360 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:03:12.574371 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:03:12.574382 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:03:12.574393 | orchestrator | 2026-01-30 03:03:12.574404 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-01-30 03:03:12.574414 | orchestrator | Friday 30 January 2026 03:02:57 +0000 (0:00:07.923) 0:04:37.908 ******** 2026-01-30 03:03:12.574425 | orchestrator | changed: [testbed-manager] 2026-01-30 03:03:12.574436 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:03:12.574446 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:03:12.574457 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:03:12.574467 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:03:12.574478 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:03:12.574489 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:03:12.574509 | 
orchestrator | 2026-01-30 03:03:12.574520 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-01-30 03:03:12.574531 | orchestrator | Friday 30 January 2026 03:03:05 +0000 (0:00:07.467) 0:04:45.375 ******** 2026-01-30 03:03:12.574542 | orchestrator | ok: [testbed-manager] 2026-01-30 03:03:12.574553 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:03:12.574564 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:03:12.574575 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:03:12.574585 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:03:12.574596 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:03:12.574607 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:03:12.574618 | orchestrator | 2026-01-30 03:03:12.574630 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-01-30 03:03:12.574641 | orchestrator | Friday 30 January 2026 03:03:06 +0000 (0:00:01.699) 0:04:47.075 ******** 2026-01-30 03:03:12.574652 | orchestrator | changed: [testbed-manager] 2026-01-30 03:03:12.574663 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:03:12.574674 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:03:12.574684 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:03:12.574695 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:03:12.574706 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:03:12.574717 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:03:12.574728 | orchestrator | 2026-01-30 03:03:12.574752 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-01-30 03:03:22.738313 | orchestrator | Friday 30 January 2026 03:03:12 +0000 (0:00:05.804) 0:04:52.880 ******** 2026-01-30 03:03:22.738451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:03:22.738476 | orchestrator | 2026-01-30 03:03:22.738492 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-01-30 03:03:22.738507 | orchestrator | Friday 30 January 2026 03:03:12 +0000 (0:00:00.364) 0:04:53.244 ******** 2026-01-30 03:03:22.738521 | orchestrator | changed: [testbed-manager] 2026-01-30 03:03:22.738535 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:03:22.738550 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:03:22.738563 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:03:22.738576 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:03:22.738589 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:03:22.738602 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:03:22.738616 | orchestrator | 2026-01-30 03:03:22.738629 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-01-30 03:03:22.738641 | orchestrator | Friday 30 January 2026 03:03:13 +0000 (0:00:00.745) 0:04:53.990 ******** 2026-01-30 03:03:22.738653 | orchestrator | ok: [testbed-manager] 2026-01-30 03:03:22.738668 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:03:22.738682 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:03:22.738696 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:03:22.738710 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:03:22.738724 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:03:22.738738 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:03:22.738751 | orchestrator | 2026-01-30 03:03:22.738766 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-01-30 03:03:22.738780 | orchestrator | Friday 30 January 2026 03:03:15 +0000 (0:00:01.785) 0:04:55.776 ******** 2026-01-30 03:03:22.738794 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:03:22.738808 | orchestrator | changed: [testbed-manager] 
2026-01-30 03:03:22.738822 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:03:22.738837 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:03:22.738851 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:03:22.738864 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:03:22.738877 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:03:22.738891 | orchestrator | 2026-01-30 03:03:22.738905 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-01-30 03:03:22.738950 | orchestrator | Friday 30 January 2026 03:03:16 +0000 (0:00:00.739) 0:04:56.516 ******** 2026-01-30 03:03:22.738961 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:03:22.738971 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:03:22.738980 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:03:22.738989 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:03:22.738998 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:03:22.739007 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:03:22.739016 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:03:22.739025 | orchestrator | 2026-01-30 03:03:22.739048 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-01-30 03:03:22.739058 | orchestrator | Friday 30 January 2026 03:03:16 +0000 (0:00:00.266) 0:04:56.783 ******** 2026-01-30 03:03:22.739066 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:03:22.739074 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:03:22.739082 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:03:22.739090 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:03:22.739097 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:03:22.739105 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:03:22.739113 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:03:22.739121 | orchestrator | 2026-01-30 03:03:22.739129 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-01-30 03:03:22.739137 | orchestrator | Friday 30 January 2026 03:03:16 +0000 (0:00:00.353) 0:04:57.137 ******** 2026-01-30 03:03:22.739145 | orchestrator | ok: [testbed-manager] 2026-01-30 03:03:22.739182 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:03:22.739195 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:03:22.739207 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:03:22.739215 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:03:22.739223 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:03:22.739231 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:03:22.739239 | orchestrator | 2026-01-30 03:03:22.739247 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-01-30 03:03:22.739255 | orchestrator | Friday 30 January 2026 03:03:17 +0000 (0:00:00.283) 0:04:57.420 ******** 2026-01-30 03:03:22.739263 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:03:22.739270 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:03:22.739278 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:03:22.739286 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:03:22.739294 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:03:22.739302 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:03:22.739310 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:03:22.739317 | orchestrator | 2026-01-30 03:03:22.739325 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-01-30 03:03:22.739334 | orchestrator | Friday 30 January 2026 03:03:17 +0000 (0:00:00.234) 0:04:57.655 ******** 2026-01-30 03:03:22.739342 | orchestrator | ok: [testbed-manager] 2026-01-30 03:03:22.739350 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:03:22.739358 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:03:22.739365 | orchestrator | ok: 
[testbed-node-5] 2026-01-30 03:03:22.739372 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:03:22.739379 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:03:22.739385 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:03:22.739392 | orchestrator | 2026-01-30 03:03:22.739399 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-01-30 03:03:22.739405 | orchestrator | Friday 30 January 2026 03:03:17 +0000 (0:00:00.314) 0:04:57.969 ******** 2026-01-30 03:03:22.739412 | orchestrator | ok: [testbed-manager] =>  2026-01-30 03:03:22.739419 | orchestrator |  docker_version: 5:27.5.1 2026-01-30 03:03:22.739425 | orchestrator | ok: [testbed-node-3] =>  2026-01-30 03:03:22.739432 | orchestrator |  docker_version: 5:27.5.1 2026-01-30 03:03:22.739439 | orchestrator | ok: [testbed-node-4] =>  2026-01-30 03:03:22.739445 | orchestrator |  docker_version: 5:27.5.1 2026-01-30 03:03:22.739461 | orchestrator | ok: [testbed-node-5] =>  2026-01-30 03:03:22.739475 | orchestrator |  docker_version: 5:27.5.1 2026-01-30 03:03:22.739512 | orchestrator | ok: [testbed-node-0] =>  2026-01-30 03:03:22.739523 | orchestrator |  docker_version: 5:27.5.1 2026-01-30 03:03:22.739535 | orchestrator | ok: [testbed-node-1] =>  2026-01-30 03:03:22.739546 | orchestrator |  docker_version: 5:27.5.1 2026-01-30 03:03:22.739557 | orchestrator | ok: [testbed-node-2] =>  2026-01-30 03:03:22.739569 | orchestrator |  docker_version: 5:27.5.1 2026-01-30 03:03:22.739592 | orchestrator | 2026-01-30 03:03:22.739608 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-01-30 03:03:22.739615 | orchestrator | Friday 30 January 2026 03:03:17 +0000 (0:00:00.239) 0:04:58.209 ******** 2026-01-30 03:03:22.739621 | orchestrator | ok: [testbed-manager] =>  2026-01-30 03:03:22.739628 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-30 03:03:22.739635 | orchestrator | ok: [testbed-node-3] =>  2026-01-30 03:03:22.739641 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-01-30 03:03:22.739648 | orchestrator | ok: [testbed-node-4] =>  2026-01-30 03:03:22.739654 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-30 03:03:22.739661 | orchestrator | ok: [testbed-node-5] =>  2026-01-30 03:03:22.739667 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-30 03:03:22.739674 | orchestrator | ok: [testbed-node-0] =>  2026-01-30 03:03:22.739680 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-30 03:03:22.739687 | orchestrator | ok: [testbed-node-1] =>  2026-01-30 03:03:22.739694 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-30 03:03:22.739700 | orchestrator | ok: [testbed-node-2] =>  2026-01-30 03:03:22.739707 | orchestrator |  docker_cli_version: 5:27.5.1 2026-01-30 03:03:22.739714 | orchestrator | 2026-01-30 03:03:22.739721 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-01-30 03:03:22.739727 | orchestrator | Friday 30 January 2026 03:03:18 +0000 (0:00:00.274) 0:04:58.484 ******** 2026-01-30 03:03:22.739734 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:03:22.739741 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:03:22.739747 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:03:22.739754 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:03:22.739761 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:03:22.739767 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:03:22.739774 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:03:22.739780 | orchestrator | 2026-01-30 03:03:22.739787 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-01-30 03:03:22.739794 | orchestrator | Friday 30 January 2026 03:03:18 +0000 (0:00:00.222) 0:04:58.706 ******** 2026-01-30 03:03:22.739801 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:03:22.739807 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:03:22.739814 
| orchestrator | skipping: [testbed-node-4] 2026-01-30 03:03:22.739820 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:03:22.739827 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:03:22.739834 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:03:22.739840 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:03:22.739847 | orchestrator | 2026-01-30 03:03:22.739853 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-01-30 03:03:22.739860 | orchestrator | Friday 30 January 2026 03:03:18 +0000 (0:00:00.220) 0:04:58.927 ******** 2026-01-30 03:03:22.739874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:03:22.739882 | orchestrator | 2026-01-30 03:03:22.739889 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-01-30 03:03:22.739896 | orchestrator | Friday 30 January 2026 03:03:18 +0000 (0:00:00.320) 0:04:59.247 ******** 2026-01-30 03:03:22.739902 | orchestrator | ok: [testbed-manager] 2026-01-30 03:03:22.739909 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:03:22.739922 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:03:22.739929 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:03:22.739935 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:03:22.739942 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:03:22.739948 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:03:22.739955 | orchestrator | 2026-01-30 03:03:22.739962 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-01-30 03:03:22.739969 | orchestrator | Friday 30 January 2026 03:03:19 +0000 (0:00:00.853) 0:05:00.101 ******** 2026-01-30 03:03:22.739975 | orchestrator | ok: [testbed-manager] 
2026-01-30 03:03:22.739982 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:03:22.739988 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:03:22.739995 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:03:22.740001 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:03:22.740008 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:03:22.740014 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:03:22.740021 | orchestrator | 2026-01-30 03:03:22.740028 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-01-30 03:03:22.740035 | orchestrator | Friday 30 January 2026 03:03:22 +0000 (0:00:02.633) 0:05:02.734 ******** 2026-01-30 03:03:22.740042 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-01-30 03:03:22.740049 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-01-30 03:03:22.740056 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-01-30 03:03:22.740063 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:03:22.740070 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-01-30 03:03:22.740076 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-01-30 03:03:22.740083 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-01-30 03:03:22.740090 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:03:22.740096 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-01-30 03:03:22.740103 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-01-30 03:03:22.740110 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-01-30 03:03:22.740116 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:03:22.740123 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-01-30 03:03:22.740129 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-01-30 03:03:22.740136 | orchestrator | skipping: 
[testbed-node-5] => (item=docker-engine)  2026-01-30 03:03:22.740143 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:03:22.740180 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-01-30 03:04:22.018098 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-01-30 03:04:22.018257 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-01-30 03:04:22.018275 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:04:22.018288 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-01-30 03:04:22.018300 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-01-30 03:04:22.018311 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-01-30 03:04:22.018322 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:04:22.018333 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-01-30 03:04:22.018344 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-01-30 03:04:22.018354 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-01-30 03:04:22.018365 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:04:22.018377 | orchestrator | 2026-01-30 03:04:22.018389 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-01-30 03:04:22.018402 | orchestrator | Friday 30 January 2026 03:03:22 +0000 (0:00:00.518) 0:05:03.253 ******** 2026-01-30 03:04:22.018413 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.018424 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.018436 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.018447 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.018477 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.018489 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.018507 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.018522 | orchestrator | 2026-01-30 
03:04:22.018533 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-01-30 03:04:22.018544 | orchestrator | Friday 30 January 2026 03:03:29 +0000 (0:00:06.818) 0:05:10.071 ******** 2026-01-30 03:04:22.018555 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.018566 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.018577 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.018588 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.018599 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.018610 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.018621 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.018631 | orchestrator | 2026-01-30 03:04:22.018643 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-01-30 03:04:22.018654 | orchestrator | Friday 30 January 2026 03:03:30 +0000 (0:00:01.044) 0:05:11.115 ******** 2026-01-30 03:04:22.018664 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.018675 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.018686 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.018697 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.018708 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.018718 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.018729 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.018740 | orchestrator | 2026-01-30 03:04:22.018751 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-01-30 03:04:22.018762 | orchestrator | Friday 30 January 2026 03:03:38 +0000 (0:00:07.923) 0:05:19.039 ******** 2026-01-30 03:04:22.018773 | orchestrator | changed: [testbed-manager] 2026-01-30 03:04:22.018784 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.018795 | orchestrator | changed: [testbed-node-4] 2026-01-30 
03:04:22.018806 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.018816 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.018827 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.018838 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.018849 | orchestrator | 2026-01-30 03:04:22.018860 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-01-30 03:04:22.018871 | orchestrator | Friday 30 January 2026 03:03:41 +0000 (0:00:03.191) 0:05:22.231 ******** 2026-01-30 03:04:22.018882 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.018893 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.018904 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.018914 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.018925 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.018936 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.018947 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.018957 | orchestrator | 2026-01-30 03:04:22.018968 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-01-30 03:04:22.018979 | orchestrator | Friday 30 January 2026 03:03:43 +0000 (0:00:01.325) 0:05:23.556 ******** 2026-01-30 03:04:22.018993 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.019012 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.019037 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.019062 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.019080 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.019098 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.019115 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.019132 | orchestrator | 2026-01-30 03:04:22.019150 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-01-30 
03:04:22.019170 | orchestrator | Friday 30 January 2026 03:03:44 +0000 (0:00:01.464) 0:05:25.021 ******** 2026-01-30 03:04:22.019189 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:04:22.019240 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:04:22.019271 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:04:22.019288 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:04:22.019305 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:04:22.019322 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:04:22.019338 | orchestrator | changed: [testbed-manager] 2026-01-30 03:04:22.019355 | orchestrator | 2026-01-30 03:04:22.019372 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-01-30 03:04:22.019388 | orchestrator | Friday 30 January 2026 03:03:45 +0000 (0:00:00.571) 0:05:25.593 ******** 2026-01-30 03:04:22.019404 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.019421 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.019438 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.019453 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.019470 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.019487 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.019504 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.019522 | orchestrator | 2026-01-30 03:04:22.019540 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-01-30 03:04:22.019584 | orchestrator | Friday 30 January 2026 03:03:54 +0000 (0:00:09.570) 0:05:35.164 ******** 2026-01-30 03:04:22.019604 | orchestrator | changed: [testbed-manager] 2026-01-30 03:04:22.019622 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.019638 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.019654 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.019670 | orchestrator | changed: 
[testbed-node-0] 2026-01-30 03:04:22.019685 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.019697 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.019708 | orchestrator | 2026-01-30 03:04:22.019719 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-01-30 03:04:22.019730 | orchestrator | Friday 30 January 2026 03:03:55 +0000 (0:00:00.940) 0:05:36.105 ******** 2026-01-30 03:04:22.019741 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.019751 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.019762 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.019773 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.019784 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.019795 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.019805 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.019816 | orchestrator | 2026-01-30 03:04:22.019827 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-01-30 03:04:22.019838 | orchestrator | Friday 30 January 2026 03:04:04 +0000 (0:00:08.994) 0:05:45.100 ******** 2026-01-30 03:04:22.019849 | orchestrator | ok: [testbed-manager] 2026-01-30 03:04:22.019860 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:04:22.019871 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:04:22.019881 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:04:22.019892 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:04:22.019903 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:04:22.019914 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:04:22.019924 | orchestrator | 2026-01-30 03:04:22.019935 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-01-30 03:04:22.019946 | orchestrator | Friday 30 January 2026 03:04:15 +0000 (0:00:10.951) 0:05:56.052 ******** 2026-01-30 
03:04:22.019957 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-30 03:04:22.019968 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-30 03:04:22.019979 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-30 03:04:22.019990 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-30 03:04:22.020001 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-30 03:04:22.020011 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-30 03:04:22.020022 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-30 03:04:22.020033 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-30 03:04:22.020052 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-30 03:04:22.020063 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-30 03:04:22.020074 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-30 03:04:22.020085 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-30 03:04:22.020133 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-30 03:04:22.020149 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-30 03:04:22.020160 | orchestrator |
2026-01-30 03:04:22.020171 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-30 03:04:22.020183 | orchestrator | Friday 30 January 2026 03:04:16 +0000 (0:00:01.143) 0:05:57.196 ********
2026-01-30 03:04:22.020193 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:04:22.020233 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:04:22.020245 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:04:22.020256 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:04:22.020267 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:04:22.020278 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:04:22.020289 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:04:22.020300 | orchestrator |
2026-01-30 03:04:22.020311 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-30 03:04:22.020322 | orchestrator | Friday 30 January 2026 03:04:17 +0000 (0:00:00.483) 0:05:57.679 ********
2026-01-30 03:04:22.020333 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:22.020344 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:04:22.020355 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:04:22.020366 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:04:22.020377 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:04:22.020388 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:04:22.020399 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:04:22.020409 | orchestrator |
2026-01-30 03:04:22.020420 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-30 03:04:22.020433 | orchestrator | Friday 30 January 2026 03:04:21 +0000 (0:00:03.751) 0:06:01.431 ********
2026-01-30 03:04:22.020444 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:04:22.020455 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:04:22.020466 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:04:22.020476 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:04:22.020487 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:04:22.020498 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:04:22.020509 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:04:22.020520 | orchestrator |
2026-01-30 03:04:22.020532 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-30 03:04:22.020544 | orchestrator | Friday 30 January 2026 03:04:21 +0000 (0:00:00.481) 0:06:01.912 ********
2026-01-30 03:04:22.020555 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-30 03:04:22.020566 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-30 03:04:22.020577 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:04:22.020588 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-30 03:04:22.020599 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-30 03:04:22.020610 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:04:22.020621 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-30 03:04:22.020632 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-30 03:04:22.020643 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:04:22.020663 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-30 03:04:40.533716 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-30 03:04:40.533840 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:04:40.533856 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-30 03:04:40.533892 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-30 03:04:40.533905 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:04:40.533916 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-30 03:04:40.533927 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-30 03:04:40.533938 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:04:40.533948 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-30 03:04:40.533959 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-30 03:04:40.533970 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:04:40.533982 | orchestrator |
2026-01-30 03:04:40.533995 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-01-30 03:04:40.534008 |
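Note on the "Block installation of python docker packages" task above: blocking distro packages on Debian-family systems is typically done by pinning them to a negative priority via apt preferences. The log does not show the file the role would write, but a minimal sketch of such a pin (path and wording assumed, not taken from this log) looks like:

```
# /etc/apt/preferences.d/block-python-docker (hypothetical path)
Package: python3-docker python-docker
Pin: release *
Pin-Priority: -1
```

A priority below 0 prevents apt from ever installing the listed packages, so the pip-installed bindings cannot be shadowed by a later distro upgrade.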
orchestrator | Friday 30 January 2026 03:04:22 +0000 (0:00:00.672) 0:06:02.584 ********
2026-01-30 03:04:40.534080 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:04:40.534092 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:04:40.534103 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:04:40.534114 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:04:40.534125 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:04:40.534136 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:04:40.534147 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:04:40.534157 | orchestrator |
2026-01-30 03:04:40.534168 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-01-30 03:04:40.534196 | orchestrator | Friday 30 January 2026 03:04:22 +0000 (0:00:00.480) 0:06:03.065 ********
2026-01-30 03:04:40.534257 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:04:40.534271 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:04:40.534284 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:04:40.534304 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:04:40.534322 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:04:40.534341 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:04:40.534360 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:04:40.534379 | orchestrator |
2026-01-30 03:04:40.534400 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-01-30 03:04:40.534422 | orchestrator | Friday 30 January 2026 03:04:23 +0000 (0:00:00.500) 0:06:03.565 ********
2026-01-30 03:04:40.534442 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:04:40.534464 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:04:40.534484 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:04:40.534502 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:04:40.534514 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:04:40.534527 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:04:40.534540 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:04:40.534552 | orchestrator |
2026-01-30 03:04:40.534565 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-01-30 03:04:40.534578 | orchestrator | Friday 30 January 2026 03:04:23 +0000 (0:00:00.494) 0:06:04.060 ********
2026-01-30 03:04:40.534591 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.534605 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:04:40.534618 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:04:40.534629 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:04:40.534640 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:04:40.534650 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:04:40.534661 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:04:40.534671 | orchestrator |
2026-01-30 03:04:40.534682 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-01-30 03:04:40.534693 | orchestrator | Friday 30 January 2026 03:04:25 +0000 (0:00:01.850) 0:06:05.911 ********
2026-01-30 03:04:40.534705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:04:40.534719 | orchestrator |
2026-01-30 03:04:40.534747 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-01-30 03:04:40.534759 | orchestrator | Friday 30 January 2026 03:04:26 +0000 (0:00:00.804) 0:06:06.715 ********
2026-01-30 03:04:40.534769 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.534780 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:04:40.534791 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:04:40.534802 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:04:40.534812 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:04:40.534823 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:04:40.534834 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:04:40.534845 | orchestrator |
2026-01-30 03:04:40.534856 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-01-30 03:04:40.534866 | orchestrator | Friday 30 January 2026 03:04:27 +0000 (0:00:00.802) 0:06:07.517 ********
2026-01-30 03:04:40.534877 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.534888 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:04:40.534899 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:04:40.534909 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:04:40.534920 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:04:40.534931 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:04:40.534942 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:04:40.534952 | orchestrator |
2026-01-30 03:04:40.534967 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-01-30 03:04:40.534985 | orchestrator | Friday 30 January 2026 03:04:28 +0000 (0:00:00.816) 0:06:08.334 ********
2026-01-30 03:04:40.535002 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.535020 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:04:40.535037 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:04:40.535057 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:04:40.535076 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:04:40.535093 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:04:40.535109 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:04:40.535119 | orchestrator |
2026-01-30 03:04:40.535130 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-01-30 03:04:40.535161 |
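Note on the "Create systemd overlay directory" and "Copy systemd overlay file" tasks above: a systemd overlay (drop-in) adds or overrides directives for the docker unit without editing the vendor unit file. The log does not reveal the file's contents; a minimal sketch of the shape such a drop-in takes (path and directive assumed, purely illustrative):

```
# /etc/systemd/system/docker.service.d/overlay.conf (hypothetical path)
[Service]
# Example directive only; the real role may set something different.
TimeoutStartSec=300
```

This is why the playbook follows up with a conditional "Reload systemd daemon" task: systemd only picks up new drop-ins after a daemon-reload.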
orchestrator | Friday 30 January 2026 03:04:29 +0000 (0:00:01.458) 0:06:09.792 ********
2026-01-30 03:04:40.535173 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:04:40.535184 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:04:40.535195 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:04:40.535206 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:04:40.535270 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:04:40.535283 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:04:40.535293 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:04:40.535304 | orchestrator |
2026-01-30 03:04:40.535315 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-01-30 03:04:40.535326 | orchestrator | Friday 30 January 2026 03:04:30 +0000 (0:00:01.330) 0:06:11.123 ********
2026-01-30 03:04:40.535337 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.535348 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:04:40.535358 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:04:40.535369 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:04:40.535380 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:04:40.535391 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:04:40.535401 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:04:40.535412 | orchestrator |
2026-01-30 03:04:40.535423 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-01-30 03:04:40.535434 | orchestrator | Friday 30 January 2026 03:04:32 +0000 (0:00:01.281) 0:06:12.404 ********
2026-01-30 03:04:40.535445 | orchestrator | changed: [testbed-manager]
2026-01-30 03:04:40.535455 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:04:40.535466 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:04:40.535477 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:04:40.535488 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:04:40.535512 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:04:40.535523 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:04:40.535538 | orchestrator |
2026-01-30 03:04:40.535556 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-01-30 03:04:40.535574 | orchestrator | Friday 30 January 2026 03:04:33 +0000 (0:00:01.345) 0:06:13.749 ********
2026-01-30 03:04:40.535594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:04:40.535612 | orchestrator |
2026-01-30 03:04:40.535625 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-01-30 03:04:40.535636 | orchestrator | Friday 30 January 2026 03:04:34 +0000 (0:00:00.951) 0:06:14.701 ********
2026-01-30 03:04:40.535646 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.535657 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:04:40.535668 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:04:40.535678 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:04:40.535689 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:04:40.535699 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:04:40.535710 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:04:40.535721 | orchestrator |
2026-01-30 03:04:40.535732 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-01-30 03:04:40.535758 | orchestrator | Friday 30 January 2026 03:04:35 +0000 (0:00:01.442) 0:06:16.143 ********
2026-01-30 03:04:40.535769 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.535780 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:04:40.535791 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:04:40.535801 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:04:40.535812 | orchestrator |
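Note on the "Copy daemon.json configuration file" task above: it templates the Docker daemon configuration to /etc/docker/daemon.json, and its "changed" status on every host is what later triggers the "Restart docker service" handler. The actual contents are not shown in the log; a plausible minimal daemon.json of the kind such roles deploy (all values assumed, not from this log):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "live-restore": true
}
```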
ok: [testbed-node-0]
2026-01-30 03:04:40.535822 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:04:40.535833 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:04:40.535843 | orchestrator |
2026-01-30 03:04:40.535854 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-01-30 03:04:40.535865 | orchestrator | Friday 30 January 2026 03:04:36 +0000 (0:00:01.160) 0:06:17.303 ********
2026-01-30 03:04:40.535876 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.535886 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:04:40.535905 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:04:40.535920 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:04:40.535936 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:04:40.535955 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:04:40.535973 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:04:40.535992 | orchestrator |
2026-01-30 03:04:40.536003 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-01-30 03:04:40.536014 | orchestrator | Friday 30 January 2026 03:04:38 +0000 (0:00:01.113) 0:06:18.417 ********
2026-01-30 03:04:40.536025 | orchestrator | ok: [testbed-manager]
2026-01-30 03:04:40.536035 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:04:40.536046 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:04:40.536056 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:04:40.536067 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:04:40.536077 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:04:40.536088 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:04:40.536098 | orchestrator |
2026-01-30 03:04:40.536109 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-01-30 03:04:40.536119 | orchestrator | Friday 30 January 2026 03:04:39 +0000 (0:00:01.299) 0:06:19.716 ********
2026-01-30 03:04:40.536130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:04:40.536141 | orchestrator |
2026-01-30 03:04:40.536152 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-30 03:04:40.536163 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.845) 0:06:20.562 ********
2026-01-30 03:04:40.536182 | orchestrator |
2026-01-30 03:04:40.536193 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-30 03:04:40.536203 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.038) 0:06:20.600 ********
2026-01-30 03:04:40.536262 | orchestrator |
2026-01-30 03:04:40.536277 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-30 03:04:40.536288 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.043) 0:06:20.644 ********
2026-01-30 03:04:40.536299 | orchestrator |
2026-01-30 03:04:40.536310 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-30 03:04:40.536337 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.037) 0:06:20.681 ********
2026-01-30 03:05:05.688852 | orchestrator |
2026-01-30 03:05:05.688968 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-30 03:05:05.688985 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.037) 0:06:20.718 ********
2026-01-30 03:05:05.688996 | orchestrator |
2026-01-30 03:05:05.689006 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-30 03:05:05.689016 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.043) 0:06:20.761 ********
2026-01-30 03:05:05.689026 | orchestrator |
2026-01-30 03:05:05.689036 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-01-30 03:05:05.689046 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.040) 0:06:20.801 ********
2026-01-30 03:05:05.689056 | orchestrator |
2026-01-30 03:05:05.689066 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-30 03:05:05.689076 | orchestrator | Friday 30 January 2026 03:04:40 +0000 (0:00:00.038) 0:06:20.840 ********
2026-01-30 03:05:05.689086 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:05.689097 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:05.689106 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:05.689116 | orchestrator |
2026-01-30 03:05:05.689126 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-01-30 03:05:05.689136 | orchestrator | Friday 30 January 2026 03:04:41 +0000 (0:00:01.116) 0:06:21.956 ********
2026-01-30 03:05:05.689146 | orchestrator | changed: [testbed-manager]
2026-01-30 03:05:05.689157 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:05.689166 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:05.689176 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:05.689186 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:05.689195 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:05.689205 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:05.689215 | orchestrator |
2026-01-30 03:05:05.689224 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-01-30 03:05:05.689300 | orchestrator | Friday 30 January 2026 03:04:43 +0000 (0:00:01.539) 0:06:23.496 ********
2026-01-30 03:05:05.689312 | orchestrator | changed: [testbed-manager]
2026-01-30 03:05:05.689322 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:05.689332 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:05.689341 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:05.689351 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:05.689361 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:05.689370 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:05.689382 | orchestrator |
2026-01-30 03:05:05.689393 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-01-30 03:05:05.689404 | orchestrator | Friday 30 January 2026 03:04:44 +0000 (0:00:01.189) 0:06:24.686 ********
2026-01-30 03:05:05.689416 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:05:05.689427 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:05.689439 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:05.689450 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:05.689462 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:05.689473 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:05.689499 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:05.689534 | orchestrator |
2026-01-30 03:05:05.689546 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-01-30 03:05:05.689557 | orchestrator | Friday 30 January 2026 03:04:46 +0000 (0:00:02.260) 0:06:26.946 ********
2026-01-30 03:05:05.689568 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:05:05.689580 | orchestrator |
2026-01-30 03:05:05.689592 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-01-30 03:05:05.689605 | orchestrator | Friday 30 January 2026 03:04:46 +0000 (0:00:00.085) 0:06:27.031 ********
2026-01-30 03:05:05.689617 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:05.689628 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:05.689641 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:05.689652 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:05.689663 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:05.689675 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:05.689687 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:05.689699 | orchestrator |
2026-01-30 03:05:05.689710 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-01-30 03:05:05.689723 | orchestrator | Friday 30 January 2026 03:04:47 +0000 (0:00:00.966) 0:06:27.998 ********
2026-01-30 03:05:05.689735 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:05:05.689744 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:05:05.689754 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:05:05.689763 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:05:05.689773 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:05:05.689783 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:05:05.689792 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:05:05.689802 | orchestrator |
2026-01-30 03:05:05.689811 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-01-30 03:05:05.689821 | orchestrator | Friday 30 January 2026 03:04:48 +0000 (0:00:00.520) 0:06:28.519 ********
2026-01-30 03:05:05.689833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:05:05.689845 | orchestrator |
2026-01-30 03:05:05.689855 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-01-30 03:05:05.689864 | orchestrator | Friday 30 January 2026 03:04:49 +0000 (0:00:01.013) 0:06:29.532 ********
2026-01-30 03:05:05.689874 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:05.689884 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:05.689893 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:05.689903 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:05.689913 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:05.689922 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:05.689932 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:05.689942 | orchestrator |
2026-01-30 03:05:05.689951 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-01-30 03:05:05.689961 | orchestrator | Friday 30 January 2026 03:04:50 +0000 (0:00:00.812) 0:06:30.345 ********
2026-01-30 03:05:05.689971 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-01-30 03:05:05.689998 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-01-30 03:05:05.690010 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-01-30 03:05:05.690077 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-01-30 03:05:05.690088 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-01-30 03:05:05.690097 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-01-30 03:05:05.690107 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-01-30 03:05:05.690117 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-01-30 03:05:05.690127 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-01-30 03:05:05.690137 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-01-30 03:05:05.690165 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-01-30 03:05:05.690175 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-01-30 03:05:05.690185 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-01-30 03:05:05.690195 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-01-30 03:05:05.690205 | orchestrator |
2026-01-30 03:05:05.690215 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-01-30 03:05:05.690224 | orchestrator | Friday 30 January 2026 03:04:52 +0000 (0:00:02.381) 0:06:32.727 ********
2026-01-30 03:05:05.690256 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:05:05.690267 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:05:05.690277 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:05:05.690286 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:05:05.690296 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:05:05.690305 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:05:05.690315 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:05:05.690325 | orchestrator |
2026-01-30 03:05:05.690334 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-01-30 03:05:05.690344 | orchestrator | Friday 30 January 2026 03:04:52 +0000 (0:00:00.599) 0:06:33.326 ********
2026-01-30 03:05:05.690356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:05:05.690368 | orchestrator |
2026-01-30 03:05:05.690377 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-01-30 03:05:05.690387 | orchestrator | Friday 30 January 2026 03:04:53 +0000 (0:00:00.732) 0:06:34.059 ********
2026-01-30 03:05:05.690397 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:05.690406 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:05.690416 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:05.690426 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:05.690435 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:05.690451 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:05.690461 | orchestrator | ok:
[testbed-node-2] 2026-01-30 03:05:05.690471 | orchestrator | 2026-01-30 03:05:05.690481 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-30 03:05:05.690490 | orchestrator | Friday 30 January 2026 03:04:54 +0000 (0:00:00.818) 0:06:34.877 ******** 2026-01-30 03:05:05.690500 | orchestrator | ok: [testbed-manager] 2026-01-30 03:05:05.690510 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:05:05.690519 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:05:05.690529 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:05:05.690538 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:05:05.690548 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:05:05.690558 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:05:05.690567 | orchestrator | 2026-01-30 03:05:05.690577 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-30 03:05:05.690587 | orchestrator | Friday 30 January 2026 03:04:55 +0000 (0:00:00.960) 0:06:35.838 ******** 2026-01-30 03:05:05.690596 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:05:05.690606 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:05:05.690616 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:05:05.690625 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:05:05.690635 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:05:05.690644 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:05:05.690654 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:05:05.690663 | orchestrator | 2026-01-30 03:05:05.690673 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-30 03:05:05.690683 | orchestrator | Friday 30 January 2026 03:04:55 +0000 (0:00:00.473) 0:06:36.311 ******** 2026-01-30 03:05:05.690693 | orchestrator | ok: [testbed-manager] 2026-01-30 03:05:05.690702 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:05:05.690712 | 
orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:05.690727 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:05.690737 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:05.690746 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:05.690756 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:05.690766 | orchestrator |
2026-01-30 03:05:05.690775 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-01-30 03:05:05.690785 | orchestrator | Friday 30 January 2026 03:04:57 +0000 (0:00:01.414) 0:06:37.725 ********
2026-01-30 03:05:05.690795 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:05:05.690804 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:05:05.690814 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:05:05.690824 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:05:05.690833 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:05:05.690843 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:05:05.690853 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:05:05.690862 | orchestrator |
2026-01-30 03:05:05.690872 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-01-30 03:05:05.690882 | orchestrator | Friday 30 January 2026 03:04:57 +0000 (0:00:00.453) 0:06:38.179 ********
2026-01-30 03:05:05.690892 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:05.690902 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:05.690911 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:05.690921 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:05.690931 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:05.690940 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:05.690958 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:36.509222 | orchestrator |
2026-01-30 03:05:36.509416 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-30 03:05:36.509439 | orchestrator | Friday 30 January 2026 03:05:05 +0000 (0:00:07.820) 0:06:46.000 ********
2026-01-30 03:05:36.509451 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.509464 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:36.509476 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:36.509487 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:36.509498 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:36.509509 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:36.509522 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:36.509541 | orchestrator |
2026-01-30 03:05:36.509559 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-30 03:05:36.509578 | orchestrator | Friday 30 January 2026 03:05:07 +0000 (0:00:01.642) 0:06:47.642 ********
2026-01-30 03:05:36.509597 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.509615 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:36.509635 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:36.509655 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:36.509674 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:36.509686 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:36.509697 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:36.509707 | orchestrator |
2026-01-30 03:05:36.509718 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-30 03:05:36.509729 | orchestrator | Friday 30 January 2026 03:05:08 +0000 (0:00:01.586) 0:06:49.229 ********
2026-01-30 03:05:36.509740 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.509754 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:36.509767 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:36.509779 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:36.509791 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:36.509803 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:36.509816 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:36.509827 | orchestrator |
2026-01-30 03:05:36.509841 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-30 03:05:36.509854 | orchestrator | Friday 30 January 2026 03:05:10 +0000 (0:00:01.551) 0:06:50.780 ********
2026-01-30 03:05:36.509897 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.509918 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.509936 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.509954 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.509974 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.509992 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.510012 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.510106 | orchestrator |
2026-01-30 03:05:36.510119 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-30 03:05:36.510131 | orchestrator | Friday 30 January 2026 03:05:11 +0000 (0:00:00.787) 0:06:51.567 ********
2026-01-30 03:05:36.510142 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:05:36.510153 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:05:36.510164 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:05:36.510174 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:05:36.510185 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:05:36.510197 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:05:36.510208 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:05:36.510218 | orchestrator |
2026-01-30 03:05:36.510230 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-30 03:05:36.510241 | orchestrator | Friday 30 January 2026 03:05:12 +0000 (0:00:00.474) 0:06:52.479 ********
2026-01-30 03:05:36.510302 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:05:36.510324 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:05:36.510343 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:05:36.510361 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:05:36.510380 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:05:36.510399 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:05:36.510417 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:05:36.510434 | orchestrator |
2026-01-30 03:05:36.510445 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-30 03:05:36.510456 | orchestrator | Friday 30 January 2026 03:05:12 +0000 (0:00:00.474) 0:06:52.953 ********
2026-01-30 03:05:36.510467 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.510478 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.510489 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.510500 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.510528 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.510539 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.510550 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.510561 | orchestrator |
2026-01-30 03:05:36.510571 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-30 03:05:36.510582 | orchestrator | Friday 30 January 2026 03:05:13 +0000 (0:00:00.490) 0:06:53.444 ********
2026-01-30 03:05:36.510593 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.510605 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.510615 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.510626 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.510636 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.510647 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.510662 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.510681 | orchestrator |
2026-01-30 03:05:36.510698 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-30 03:05:36.510717 | orchestrator | Friday 30 January 2026 03:05:13 +0000 (0:00:00.493) 0:06:53.938 ********
2026-01-30 03:05:36.510735 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.510754 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.510773 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.510792 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.510804 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.510814 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.510825 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.510835 | orchestrator |
2026-01-30 03:05:36.510846 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-30 03:05:36.510857 | orchestrator | Friday 30 January 2026 03:05:14 +0000 (0:00:00.658) 0:06:54.596 ********
2026-01-30 03:05:36.510880 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.510891 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.510902 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.510912 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.510923 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.510934 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.510944 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.510955 | orchestrator |
2026-01-30 03:05:36.510987 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-30 03:05:36.510999 | orchestrator | Friday 30 January 2026 03:05:19 +0000 (0:00:05.527) 0:07:00.124 ********
2026-01-30 03:05:36.511010 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:05:36.511022 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:05:36.511042 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:05:36.511059 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:05:36.511077 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:05:36.511096 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:05:36.511114 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:05:36.511134 | orchestrator |
2026-01-30 03:05:36.511153 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-30 03:05:36.511172 | orchestrator | Friday 30 January 2026 03:05:20 +0000 (0:00:00.488) 0:07:00.613 ********
2026-01-30 03:05:36.511186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:05:36.511199 | orchestrator |
2026-01-30 03:05:36.511210 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-30 03:05:36.511221 | orchestrator | Friday 30 January 2026 03:05:21 +0000 (0:00:00.927) 0:07:01.540 ********
2026-01-30 03:05:36.511232 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.511243 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.511279 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.511292 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.511303 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.511313 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.511324 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.511335 | orchestrator |
2026-01-30 03:05:36.511346 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-30 03:05:36.511357 | orchestrator | Friday 30 January 2026 03:05:23 +0000 (0:00:01.856) 0:07:03.397 ********
2026-01-30 03:05:36.511368 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.511379 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.511389 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.511407 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.511426 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.511445 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.511546 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.511566 | orchestrator |
2026-01-30 03:05:36.511577 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-30 03:05:36.511589 | orchestrator | Friday 30 January 2026 03:05:24 +0000 (0:00:01.018) 0:07:04.416 ********
2026-01-30 03:05:36.511600 | orchestrator | ok: [testbed-manager]
2026-01-30 03:05:36.511612 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:05:36.511623 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:05:36.511634 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:05:36.511645 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:05:36.511657 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:05:36.511668 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:05:36.511679 | orchestrator |
2026-01-30 03:05:36.511699 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-30 03:05:36.511711 | orchestrator | Friday 30 January 2026 03:05:24 +0000 (0:00:00.734) 0:07:05.150 ********
2026-01-30 03:05:36.511723 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-30 03:05:36.511754 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-30 03:05:36.511772 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-30 03:05:36.511790 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-30 03:05:36.511809 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-30 03:05:36.511828 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-30 03:05:36.511846 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-30 03:05:36.511864 | orchestrator |
2026-01-30 03:05:36.511882 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-30 03:05:36.511902 | orchestrator | Friday 30 January 2026 03:05:26 +0000 (0:00:01.727) 0:07:06.878 ********
2026-01-30 03:05:36.511922 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:05:36.511943 | orchestrator |
2026-01-30 03:05:36.511961 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-30 03:05:36.511980 | orchestrator | Friday 30 January 2026 03:05:27 +0000 (0:00:00.750) 0:07:07.629 ********
2026-01-30 03:05:36.511999 | orchestrator | changed: [testbed-manager]
2026-01-30 03:05:36.512017 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:05:36.512032 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:05:36.512048 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:05:36.512064 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:05:36.512081 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:05:36.512099 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:05:36.512115 | orchestrator |
2026-01-30 03:05:36.512147 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-30 03:06:06.865995 | orchestrator | Friday 30 January 2026 03:05:36 +0000 (0:00:09.190) 0:07:16.819 ********
2026-01-30 03:06:06.866168 | orchestrator | ok: [testbed-manager]
2026-01-30 03:06:06.866185 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:06:06.866197 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:06:06.866208 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:06:06.866219 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:06:06.866230 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:06:06.866242 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:06:06.866253 | orchestrator |
2026-01-30 03:06:06.866265 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-30 03:06:06.866321 | orchestrator | Friday 30 January 2026 03:05:38 +0000 (0:00:01.873) 0:07:18.693 ********
2026-01-30 03:06:06.866334 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:06:06.866345 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:06:06.866356 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:06:06.866367 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:06:06.866378 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:06:06.866389 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:06:06.866400 | orchestrator |
2026-01-30 03:06:06.866411 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-30 03:06:06.866423 | orchestrator | Friday 30 January 2026 03:05:39 +0000 (0:00:01.278) 0:07:19.972 ********
2026-01-30 03:06:06.866434 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.866447 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.866486 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.866497 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.866508 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.866523 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.866537 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.866550 | orchestrator |
2026-01-30 03:06:06.866563 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-30 03:06:06.866576 | orchestrator |
2026-01-30 03:06:06.866589 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-30 03:06:06.866602 | orchestrator | Friday 30 January 2026 03:05:40 +0000 (0:00:01.228) 0:07:21.200 ********
2026-01-30 03:06:06.866615 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:06:06.866629 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:06:06.866642 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:06:06.866655 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:06:06.866667 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:06:06.866680 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:06:06.866692 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:06:06.866706 | orchestrator |
2026-01-30 03:06:06.866719 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-30 03:06:06.866732 | orchestrator |
2026-01-30 03:06:06.866745 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-30 03:06:06.866758 | orchestrator | Friday 30 January 2026 03:05:41 +0000 (0:00:00.632) 0:07:21.833 ********
2026-01-30 03:06:06.866771 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.866783 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.866796 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.866809 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.866836 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.866850 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.866863 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.866876 | orchestrator |
2026-01-30 03:06:06.866888 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-30 03:06:06.866899 | orchestrator | Friday 30 January 2026 03:05:42 +0000 (0:00:01.272) 0:07:23.105 ********
2026-01-30 03:06:06.866910 | orchestrator | ok: [testbed-manager]
2026-01-30 03:06:06.866921 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:06:06.866932 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:06:06.866943 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:06:06.866954 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:06:06.866965 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:06:06.866976 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:06:06.866987 | orchestrator |
2026-01-30 03:06:06.866999 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-30 03:06:06.867010 | orchestrator | Friday 30 January 2026 03:05:44 +0000 (0:00:01.353) 0:07:24.459 ********
2026-01-30 03:06:06.867021 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:06:06.867032 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:06:06.867043 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:06:06.867054 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:06:06.867065 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:06:06.867076 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:06:06.867087 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:06:06.867098 | orchestrator |
2026-01-30 03:06:06.867109 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-30 03:06:06.867121 | orchestrator | Friday 30 January 2026 03:05:44 +0000 (0:00:00.425) 0:07:24.885 ********
2026-01-30 03:06:06.867132 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:06:06.867145 | orchestrator |
2026-01-30 03:06:06.867156 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-30 03:06:06.867167 | orchestrator | Friday 30 January 2026 03:05:45 +0000 (0:00:00.776) 0:07:25.662 ********
2026-01-30 03:06:06.867187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:06:06.867200 | orchestrator |
2026-01-30 03:06:06.867211 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-30 03:06:06.867222 | orchestrator | Friday 30 January 2026 03:05:46 +0000 (0:00:00.688) 0:07:26.350 ********
2026-01-30 03:06:06.867233 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.867245 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.867256 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.867266 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.867296 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.867308 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.867318 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.867329 | orchestrator |
2026-01-30 03:06:06.867357 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-30 03:06:06.867369 | orchestrator | Friday 30 January 2026 03:05:54 +0000 (0:00:08.886) 0:07:35.237 ********
2026-01-30 03:06:06.867380 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.867391 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.867402 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.867413 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.867424 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.867435 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.867446 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.867456 | orchestrator |
2026-01-30 03:06:06.867468 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-30 03:06:06.867479 | orchestrator | Friday 30 January 2026 03:05:55 +0000 (0:00:00.814) 0:07:36.051 ********
2026-01-30 03:06:06.867490 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.867500 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.867511 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.867522 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.867533 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.867544 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.867554 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.867565 | orchestrator |
2026-01-30 03:06:06.867576 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-30 03:06:06.867588 | orchestrator | Friday 30 January 2026 03:05:57 +0000 (0:00:01.335) 0:07:37.387 ********
2026-01-30 03:06:06.867599 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.867609 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.867620 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.867631 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.867642 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.867653 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.867663 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.867674 | orchestrator |
2026-01-30 03:06:06.867685 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-30 03:06:06.867696 | orchestrator | Friday 30 January 2026 03:05:58 +0000 (0:00:01.800) 0:07:39.188 ********
2026-01-30 03:06:06.867707 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.867718 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.867729 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.867740 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.867751 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.867762 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.867773 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.867784 | orchestrator |
2026-01-30 03:06:06.867795 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-30 03:06:06.867806 | orchestrator | Friday 30 January 2026 03:06:00 +0000 (0:00:01.180) 0:07:40.369 ********
2026-01-30 03:06:06.867827 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.867838 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.867849 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.867860 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.867877 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.867888 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.867899 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.867909 | orchestrator |
2026-01-30 03:06:06.867920 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-30 03:06:06.867931 | orchestrator |
2026-01-30 03:06:06.867943 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-30 03:06:06.867954 | orchestrator | Friday 30 January 2026 03:06:02 +0000 (0:00:02.079) 0:07:42.448 ********
2026-01-30 03:06:06.867965 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:06:06.867976 | orchestrator |
2026-01-30 03:06:06.867987 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-30 03:06:06.867998 | orchestrator | Friday 30 January 2026 03:06:02 +0000 (0:00:00.771) 0:07:43.220 ********
2026-01-30 03:06:06.868009 | orchestrator | ok: [testbed-manager]
2026-01-30 03:06:06.868020 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:06:06.868031 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:06:06.868042 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:06:06.868052 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:06:06.868063 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:06:06.868074 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:06:06.868085 | orchestrator |
2026-01-30 03:06:06.868096 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-30 03:06:06.868107 | orchestrator | Friday 30 January 2026 03:06:03 +0000 (0:00:01.017) 0:07:44.237 ********
2026-01-30 03:06:06.868118 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:06.868129 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:06.868140 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:06.868151 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:06.868162 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:06.868173 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:06.868184 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:06.868195 | orchestrator |
2026-01-30 03:06:06.868207 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-30 03:06:06.868218 | orchestrator | Friday 30 January 2026 03:06:05 +0000 (0:00:01.101) 0:07:45.339 ********
2026-01-30 03:06:06.868229 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:06:06.868240 | orchestrator |
2026-01-30 03:06:06.868251 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-30 03:06:06.868262 | orchestrator | Friday 30 January 2026 03:06:05 +0000 (0:00:00.974) 0:07:46.313 ********
2026-01-30 03:06:06.868316 | orchestrator | ok: [testbed-manager]
2026-01-30 03:06:06.868329 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:06:06.868340 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:06:06.868351 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:06:06.868362 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:06:06.868394 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:06:06.868406 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:06:06.868417 | orchestrator |
2026-01-30 03:06:06.868435 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-30 03:06:08.349755 | orchestrator | Friday 30 January 2026 03:06:06 +0000 (0:00:00.867) 0:07:47.181 ********
2026-01-30 03:06:08.349857 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:08.349874 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:08.349885 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:08.349896 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:08.349935 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:08.349946 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:08.349958 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:08.349969 | orchestrator |
2026-01-30 03:06:08.349981 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:06:08.349993 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-30 03:06:08.350006 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-30 03:06:08.350078 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-30 03:06:08.350091 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-30 03:06:08.350102 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-30 03:06:08.350113 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-30 03:06:08.350124 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-30 03:06:08.350135 | orchestrator |
2026-01-30 03:06:08.350147 | orchestrator |
2026-01-30 03:06:08.350158 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:06:08.350170 | orchestrator | Friday 30 January 2026 03:06:07 +0000 (0:00:01.056) 0:07:48.238 ********
2026-01-30 03:06:08.350181 | orchestrator | ===============================================================================
2026-01-30 03:06:08.350191 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.92s
2026-01-30 03:06:08.350202 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.29s
2026-01-30 03:06:08.350228 | orchestrator | osism.commons.packages : Download required packages -------------------- 32.19s
2026-01-30 03:06:08.350239 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.78s
2026-01-30 03:06:08.350250 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.11s
2026-01-30 03:06:08.350262 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.95s
2026-01-30 03:06:08.350300 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 9.97s
2026-01-30 03:06:08.350324 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.57s
2026-01-30 03:06:08.350338 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.19s
2026-01-30 03:06:08.350351 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.99s
2026-01-30 03:06:08.350364 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.89s
2026-01-30 03:06:08.350376 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.16s
2026-01-30 03:06:08.350389 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.92s
2026-01-30 03:06:08.350402 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.92s
2026-01-30 03:06:08.350414 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.82s
2026-01-30 03:06:08.350426 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.47s
2026-01-30 03:06:08.350439 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.82s
2026-01-30 03:06:08.350452 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.80s
2026-01-30 03:06:08.350465 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.58s
2026-01-30 03:06:08.350487 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.53s
2026-01-30 03:06:08.609023 | orchestrator | + osism apply fail2ban
2026-01-30 03:06:21.081053 | orchestrator | 2026-01-30 03:06:21 | INFO  | Task 89f338e1-9f13-4fa0-882c-2d25dbd9781d (fail2ban) was prepared for execution.
2026-01-30 03:06:21.081168 | orchestrator | 2026-01-30 03:06:21 | INFO  | It takes a moment until task 89f338e1-9f13-4fa0-882c-2d25dbd9781d (fail2ban) has been started and output is visible here.
2026-01-30 03:06:41.935881 | orchestrator |
2026-01-30 03:06:41.935999 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-30 03:06:41.936016 | orchestrator |
2026-01-30 03:06:41.936029 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-30 03:06:41.936041 | orchestrator | Friday 30 January 2026 03:06:24 +0000 (0:00:00.236) 0:00:00.236 ********
2026-01-30 03:06:41.936054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:06:41.936086 | orchestrator |
2026-01-30 03:06:41.936097 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-30 03:06:41.936109 | orchestrator | Friday 30 January 2026 03:06:25 +0000 (0:00:00.977) 0:00:01.214 ********
2026-01-30 03:06:41.936120 | orchestrator | changed: [testbed-manager]
2026-01-30 03:06:41.936133 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:06:41.936144 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:06:41.936155 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:06:41.936166 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:06:41.936177 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:06:41.936189 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:06:41.936200 | orchestrator |
2026-01-30 03:06:41.936211 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-30 03:06:41.936222 | orchestrator | Friday 30 January 2026 03:06:37 +0000 (0:00:11.224) 0:00:12.438 ********
2026-01-30 03:06:41.936233 | orchestrator | changed: [testbed-manager] 2026-01-30 03:06:41.936245 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:06:41.936255 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:06:41.936266 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:06:41.936277 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:06:41.936288 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:06:41.936342 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:06:41.936354 | orchestrator | 2026-01-30 03:06:41.936366 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-01-30 03:06:41.936377 | orchestrator | Friday 30 January 2026 03:06:38 +0000 (0:00:01.428) 0:00:13.867 ******** 2026-01-30 03:06:41.936388 | orchestrator | ok: [testbed-manager] 2026-01-30 03:06:41.936401 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:06:41.936412 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:06:41.936425 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:06:41.936438 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:06:41.936451 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:06:41.936463 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:06:41.936477 | orchestrator | 2026-01-30 03:06:41.936490 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-01-30 03:06:41.936504 | orchestrator | Friday 30 January 2026 03:06:39 +0000 (0:00:01.396) 0:00:15.263 ******** 2026-01-30 03:06:41.936518 | orchestrator | changed: [testbed-manager] 2026-01-30 03:06:41.936531 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:06:41.936544 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:06:41.936556 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:06:41.936567 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:06:41.936578 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:06:41.936589 | orchestrator | changed: 
[testbed-node-5] 2026-01-30 03:06:41.936600 | orchestrator | 2026-01-30 03:06:41.936635 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:06:41.936662 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:06:41.936674 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:06:41.936685 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:06:41.936697 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:06:41.936708 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:06:41.936719 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:06:41.936730 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:06:41.936742 | orchestrator | 2026-01-30 03:06:41.936753 | orchestrator | 2026-01-30 03:06:41.936764 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:06:41.936775 | orchestrator | Friday 30 January 2026 03:06:41 +0000 (0:00:01.571) 0:00:16.835 ******** 2026-01-30 03:06:41.936787 | orchestrator | =============================================================================== 2026-01-30 03:06:41.936798 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.22s 2026-01-30 03:06:41.936809 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.57s 2026-01-30 03:06:41.936820 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.43s 2026-01-30 03:06:41.936831 | orchestrator | osism.services.fail2ban : 
Manage fail2ban service ----------------------- 1.40s 2026-01-30 03:06:41.936843 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 0.98s 2026-01-30 03:06:42.193194 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-30 03:06:42.193293 | orchestrator | + osism apply network 2026-01-30 03:06:54.226295 | orchestrator | 2026-01-30 03:06:54 | INFO  | Task 22dfdfe8-20da-4b11-9c94-9f553634e2b2 (network) was prepared for execution. 2026-01-30 03:06:54.226466 | orchestrator | 2026-01-30 03:06:54 | INFO  | It takes a moment until task 22dfdfe8-20da-4b11-9c94-9f553634e2b2 (network) has been started and output is visible here. 2026-01-30 03:07:21.167274 | orchestrator | 2026-01-30 03:07:21.167424 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-01-30 03:07:21.167443 | orchestrator | 2026-01-30 03:07:21.167456 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-01-30 03:07:21.167468 | orchestrator | Friday 30 January 2026 03:06:58 +0000 (0:00:00.186) 0:00:00.186 ******** 2026-01-30 03:07:21.167480 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.167492 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:07:21.167503 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:07:21.167515 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:07:21.167526 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:07:21.167537 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:07:21.167548 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:07:21.167559 | orchestrator | 2026-01-30 03:07:21.167570 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-01-30 03:07:21.167581 | orchestrator | Friday 30 January 2026 03:06:58 +0000 (0:00:00.511) 0:00:00.698 ******** 2026-01-30 03:07:21.167593 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:07:21.167631 | orchestrator | 2026-01-30 03:07:21.167642 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-01-30 03:07:21.167653 | orchestrator | Friday 30 January 2026 03:06:59 +0000 (0:00:00.964) 0:00:01.663 ******** 2026-01-30 03:07:21.167664 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.167675 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:07:21.167686 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:07:21.167697 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:07:21.167707 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:07:21.167718 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:07:21.167729 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:07:21.167740 | orchestrator | 2026-01-30 03:07:21.167751 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-01-30 03:07:21.167762 | orchestrator | Friday 30 January 2026 03:07:01 +0000 (0:00:01.925) 0:00:03.588 ******** 2026-01-30 03:07:21.167774 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.167785 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:07:21.167796 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:07:21.167807 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:07:21.167818 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:07:21.167828 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:07:21.167839 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:07:21.167850 | orchestrator | 2026-01-30 03:07:21.167861 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-01-30 03:07:21.167872 | orchestrator | Friday 30 January 2026 03:07:03 +0000 (0:00:01.688) 0:00:05.277 ******** 
2026-01-30 03:07:21.167883 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-30 03:07:21.167895 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-30 03:07:21.167906 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-30 03:07:21.167917 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-30 03:07:21.167928 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-30 03:07:21.167939 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-30 03:07:21.167949 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-30 03:07:21.167960 | orchestrator | 2026-01-30 03:07:21.167972 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-30 03:07:21.168000 | orchestrator | Friday 30 January 2026 03:07:04 +0000 (0:00:00.909) 0:00:06.186 ******** 2026-01-30 03:07:21.168012 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 03:07:21.168025 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 03:07:21.168036 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:07:21.168047 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 03:07:21.168058 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-30 03:07:21.168069 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-30 03:07:21.168080 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-30 03:07:21.168090 | orchestrator | 2026-01-30 03:07:21.168102 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-30 03:07:21.168113 | orchestrator | Friday 30 January 2026 03:07:07 +0000 (0:00:02.966) 0:00:09.153 ******** 2026-01-30 03:07:21.168124 | orchestrator | changed: [testbed-manager] 2026-01-30 03:07:21.168135 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:07:21.168146 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:07:21.168157 | orchestrator | changed: 
[testbed-node-2] 2026-01-30 03:07:21.168167 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:07:21.168178 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:07:21.168189 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:07:21.168200 | orchestrator | 2026-01-30 03:07:21.168211 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-30 03:07:21.168222 | orchestrator | Friday 30 January 2026 03:07:08 +0000 (0:00:01.449) 0:00:10.602 ******** 2026-01-30 03:07:21.168233 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 03:07:21.168244 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-30 03:07:21.168263 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:07:21.168273 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 03:07:21.168285 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 03:07:21.168296 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-30 03:07:21.168306 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-30 03:07:21.168317 | orchestrator | 2026-01-30 03:07:21.168346 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-30 03:07:21.168358 | orchestrator | Friday 30 January 2026 03:07:10 +0000 (0:00:01.501) 0:00:12.103 ******** 2026-01-30 03:07:21.168369 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.168380 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:07:21.168391 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:07:21.168402 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:07:21.168413 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:07:21.168424 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:07:21.168435 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:07:21.168445 | orchestrator | 2026-01-30 03:07:21.168456 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-30 03:07:21.168486 | 
orchestrator | Friday 30 January 2026 03:07:11 +0000 (0:00:01.008) 0:00:13.112 ******** 2026-01-30 03:07:21.168498 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:07:21.168509 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:07:21.168520 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:07:21.168531 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:07:21.168542 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:07:21.168553 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:07:21.168564 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:07:21.168575 | orchestrator | 2026-01-30 03:07:21.168586 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-30 03:07:21.168597 | orchestrator | Friday 30 January 2026 03:07:11 +0000 (0:00:00.553) 0:00:13.666 ******** 2026-01-30 03:07:21.168608 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.168619 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:07:21.168630 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:07:21.168641 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:07:21.168652 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:07:21.168663 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:07:21.168674 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:07:21.168685 | orchestrator | 2026-01-30 03:07:21.168696 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-30 03:07:21.168707 | orchestrator | Friday 30 January 2026 03:07:13 +0000 (0:00:02.055) 0:00:15.721 ******** 2026-01-30 03:07:21.168718 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:07:21.168729 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:07:21.168740 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:07:21.168751 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:07:21.168762 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:07:21.168773 | 
orchestrator | skipping: [testbed-node-5] 2026-01-30 03:07:21.168784 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-30 03:07:21.168797 | orchestrator | 2026-01-30 03:07:21.168808 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-30 03:07:21.168819 | orchestrator | Friday 30 January 2026 03:07:14 +0000 (0:00:00.790) 0:00:16.511 ******** 2026-01-30 03:07:21.168830 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.168841 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:07:21.168852 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:07:21.168863 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:07:21.168874 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:07:21.168884 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:07:21.168895 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:07:21.168906 | orchestrator | 2026-01-30 03:07:21.168917 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-30 03:07:21.168936 | orchestrator | Friday 30 January 2026 03:07:16 +0000 (0:00:01.631) 0:00:18.143 ******** 2026-01-30 03:07:21.168947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:07:21.168960 | orchestrator | 2026-01-30 03:07:21.168971 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-30 03:07:21.168987 | orchestrator | Friday 30 January 2026 03:07:17 +0000 (0:00:01.153) 0:00:19.297 ******** 2026-01-30 03:07:21.168998 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.169009 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:07:21.169020 | orchestrator 
| ok: [testbed-node-1] 2026-01-30 03:07:21.169031 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:07:21.169042 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:07:21.169052 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:07:21.169063 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:07:21.169074 | orchestrator | 2026-01-30 03:07:21.169085 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-30 03:07:21.169096 | orchestrator | Friday 30 January 2026 03:07:19 +0000 (0:00:02.077) 0:00:21.374 ******** 2026-01-30 03:07:21.169107 | orchestrator | ok: [testbed-manager] 2026-01-30 03:07:21.169118 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:07:21.169129 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:07:21.169140 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:07:21.169151 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:07:21.169161 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:07:21.169172 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:07:21.169183 | orchestrator | 2026-01-30 03:07:21.169194 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-30 03:07:21.169205 | orchestrator | Friday 30 January 2026 03:07:19 +0000 (0:00:00.611) 0:00:21.985 ******** 2026-01-30 03:07:21.169216 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-30 03:07:21.169227 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-30 03:07:21.169238 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-30 03:07:21.169249 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-30 03:07:21.169260 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-30 03:07:21.169271 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-30 03:07:21.169282 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-30 03:07:21.169293 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-30 03:07:21.169304 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-30 03:07:21.169314 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-30 03:07:21.169341 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-30 03:07:21.169352 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-30 03:07:21.169363 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-30 03:07:21.169374 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-30 03:07:21.169385 | orchestrator | 2026-01-30 03:07:21.169404 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-30 03:07:36.124061 | orchestrator | Friday 30 January 2026 03:07:21 +0000 (0:00:01.212) 0:00:23.197 ******** 2026-01-30 03:07:36.124150 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:07:36.124160 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:07:36.124167 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:07:36.124173 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:07:36.124180 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:07:36.124203 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:07:36.124209 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:07:36.124215 | orchestrator | 2026-01-30 03:07:36.124222 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-30 03:07:36.124229 | orchestrator | Friday 30 January 2026 03:07:21 +0000 (0:00:00.577) 0:00:23.775 ******** 2026-01-30 03:07:36.124236 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, testbed-node-1, testbed-manager, testbed-node-4, testbed-node-3, testbed-node-5 2026-01-30 03:07:36.124245 | orchestrator | 2026-01-30 03:07:36.124252 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-30 03:07:36.124258 | orchestrator | Friday 30 January 2026 03:07:26 +0000 (0:00:04.285) 0:00:28.060 ******** 2026-01-30 03:07:36.124265 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124281 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 
42}}) 2026-01-30 03:07:36.124315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124470 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124477 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124483 | orchestrator | 2026-01-30 03:07:36.124490 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-30 03:07:36.124496 | orchestrator | Friday 30 January 2026 03:07:31 +0000 (0:00:05.038) 0:00:33.099 ******** 2026-01-30 03:07:36.124502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124509 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124522 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124551 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-30 03:07:36.124564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 
'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:36.124592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:41.157394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-30 03:07:41.157495 | orchestrator | 2026-01-30 03:07:41.157509 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-30 03:07:41.157519 | orchestrator | Friday 30 January 2026 03:07:36 +0000 (0:00:05.049) 0:00:38.148 ******** 2026-01-30 03:07:41.157529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:07:41.157537 | orchestrator | 2026-01-30 03:07:41.157544 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
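For reference, the loop items logged above (e.g. `vni: 42`, `local_ip: 192.168.16.5`, `mtu: 1350`, plus a `dests` peer list) correspond to systemd-networkd unit files of roughly the following shape. This is a hypothetical sketch based on standard `systemd.netdev`/`systemd.network` syntax, not the actual template output of the osism.commons.network role:

```ini
; Hypothetical /etc/systemd/network/30-vxlan0.netdev (sketch, not the role's template)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

; Hypothetical /etc/systemd/network/30-vxlan0.network (sketch)
[Match]
Name=vxlan0

[Network]
; Only hosts with a non-empty 'addresses' list (e.g. the manager) get an address
Address=192.168.112.5/20

; For unicast VXLAN, one static all-zero FDB entry per peer in 'dests'
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
```

The key point visible in the log is that each host's `dests` list contains every other member of the mesh, so the role builds a full unicast VXLAN mesh rather than relying on a multicast group.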
2026-01-30 03:07:41.157552 | orchestrator | Friday 30 January 2026 03:07:37 +0000 (0:00:01.097) 0:00:39.246 ********
2026-01-30 03:07:41.157559 | orchestrator | ok: [testbed-manager]
2026-01-30 03:07:41.157567 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:07:41.157574 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:07:41.157582 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:07:41.157589 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:07:41.157596 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:07:41.157603 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:07:41.157610 | orchestrator |
2026-01-30 03:07:41.157627 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-01-30 03:07:41.157635 | orchestrator | Friday 30 January 2026 03:07:38 +0000 (0:00:00.985) 0:00:40.231 ********
2026-01-30 03:07:41.157642 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-30 03:07:41.157651 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-30 03:07:41.157658 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-30 03:07:41.157665 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-30 03:07:41.157672 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-30 03:07:41.157680 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-30 03:07:41.157687 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-30 03:07:41.157694 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-30 03:07:41.157715 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:07:41.157724 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-30 03:07:41.157731 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-30 03:07:41.157738 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-30 03:07:41.157763 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-30 03:07:41.157771 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:07:41.157778 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-30 03:07:41.157786 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-30 03:07:41.157793 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-30 03:07:41.157801 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-30 03:07:41.157808 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:07:41.157815 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-30 03:07:41.157823 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-30 03:07:41.157830 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-30 03:07:41.157837 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-30 03:07:41.157844 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:07:41.157851 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-30 03:07:41.157859 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-30 03:07:41.157866 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-30 03:07:41.157882 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-30 03:07:41.157889 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:07:41.157897 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:07:41.157906 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-30 03:07:41.157915 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-30 03:07:41.157923 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-30 03:07:41.157932 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-30 03:07:41.157940 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:07:41.157948 | orchestrator |
2026-01-30 03:07:41.157957 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-30 03:07:41.157979 | orchestrator | Friday 30 January 2026 03:07:39 +0000 (0:00:01.650) 0:00:41.882 ********
2026-01-30 03:07:41.157987 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:07:41.157996 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:07:41.158004 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:07:41.158013 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:07:41.158068 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:07:41.158076 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:07:41.158085 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:07:41.158093 | orchestrator |
2026-01-30 03:07:41.158101 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-30 03:07:41.158109 | orchestrator | Friday 30 January 2026 03:07:40 +0000 (0:00:00.530) 0:00:42.412 ********
2026-01-30 03:07:41.158118 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:07:41.158127 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:07:41.158135 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:07:41.158143 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:07:41.158152 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:07:41.158160 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:07:41.158168 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:07:41.158849 | orchestrator |
2026-01-30 03:07:41.158872 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:07:41.158881 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 03:07:41.158901 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 03:07:41.158908 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 03:07:41.158916 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 03:07:41.158923 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 03:07:41.158930 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 03:07:41.158938 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 03:07:41.158945 | orchestrator |
2026-01-30 03:07:41.158954 | orchestrator |
2026-01-30 03:07:41.158970 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:07:41.158979 | orchestrator | Friday 30 January 2026 03:07:40 +0000 (0:00:00.557) 0:00:42.970 ********
2026-01-30 03:07:41.158987 | orchestrator | ===============================================================================
2026-01-30 03:07:41.158996 | orchestrator | osism.commons.network : Create systemd networkd network
files ----------- 5.05s 2026-01-30 03:07:41.159007 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.04s 2026-01-30 03:07:41.159022 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.29s 2026-01-30 03:07:41.159043 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.97s 2026-01-30 03:07:41.159059 | orchestrator | osism.commons.network : List existing configuration files --------------- 2.08s 2026-01-30 03:07:41.159073 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.06s 2026-01-30 03:07:41.159086 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.93s 2026-01-30 03:07:41.159100 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.69s 2026-01-30 03:07:41.159114 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.65s 2026-01-30 03:07:41.159127 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2026-01-30 03:07:41.159142 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.50s 2026-01-30 03:07:41.159155 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s 2026-01-30 03:07:41.159170 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s 2026-01-30 03:07:41.159181 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.15s 2026-01-30 03:07:41.159190 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.10s 2026-01-30 03:07:41.159199 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.01s 2026-01-30 03:07:41.159207 | orchestrator | osism.commons.network : List existing configuration files 
--------------- 0.99s 2026-01-30 03:07:41.159216 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.96s 2026-01-30 03:07:41.159224 | orchestrator | osism.commons.network : Create required directories --------------------- 0.91s 2026-01-30 03:07:41.159233 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.79s 2026-01-30 03:07:41.336038 | orchestrator | + osism apply wireguard 2026-01-30 03:07:53.172578 | orchestrator | 2026-01-30 03:07:53 | INFO  | Task 3b78598d-7623-4d55-b612-40bbea1f24df (wireguard) was prepared for execution. 2026-01-30 03:07:53.172682 | orchestrator | 2026-01-30 03:07:53 | INFO  | It takes a moment until task 3b78598d-7623-4d55-b612-40bbea1f24df (wireguard) has been started and output is visible here. 2026-01-30 03:08:11.014240 | orchestrator | 2026-01-30 03:08:11.014437 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-01-30 03:08:11.014459 | orchestrator | 2026-01-30 03:08:11.014471 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-01-30 03:08:11.014484 | orchestrator | Friday 30 January 2026 03:07:56 +0000 (0:00:00.160) 0:00:00.160 ******** 2026-01-30 03:08:11.014504 | orchestrator | ok: [testbed-manager] 2026-01-30 03:08:11.014525 | orchestrator | 2026-01-30 03:08:11.014544 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-01-30 03:08:11.014568 | orchestrator | Friday 30 January 2026 03:07:58 +0000 (0:00:01.186) 0:00:01.346 ******** 2026-01-30 03:08:11.014585 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:11.014604 | orchestrator | 2026-01-30 03:08:11.014621 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-01-30 03:08:11.014640 | orchestrator | Friday 30 January 2026 03:08:03 +0000 (0:00:05.637) 0:00:06.983 ******** 2026-01-30 
03:08:11.014660 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:11.014679 | orchestrator | 2026-01-30 03:08:11.014699 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-01-30 03:08:11.014719 | orchestrator | Friday 30 January 2026 03:08:04 +0000 (0:00:00.560) 0:00:07.544 ******** 2026-01-30 03:08:11.014732 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:11.014743 | orchestrator | 2026-01-30 03:08:11.014754 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-01-30 03:08:11.014768 | orchestrator | Friday 30 January 2026 03:08:04 +0000 (0:00:00.443) 0:00:07.987 ******** 2026-01-30 03:08:11.014785 | orchestrator | ok: [testbed-manager] 2026-01-30 03:08:11.014804 | orchestrator | 2026-01-30 03:08:11.014819 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-01-30 03:08:11.014832 | orchestrator | Friday 30 January 2026 03:08:05 +0000 (0:00:00.647) 0:00:08.635 ******** 2026-01-30 03:08:11.014845 | orchestrator | ok: [testbed-manager] 2026-01-30 03:08:11.014859 | orchestrator | 2026-01-30 03:08:11.014871 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-01-30 03:08:11.014884 | orchestrator | Friday 30 January 2026 03:08:05 +0000 (0:00:00.416) 0:00:09.051 ******** 2026-01-30 03:08:11.014897 | orchestrator | ok: [testbed-manager] 2026-01-30 03:08:11.014910 | orchestrator | 2026-01-30 03:08:11.014921 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-01-30 03:08:11.014932 | orchestrator | Friday 30 January 2026 03:08:06 +0000 (0:00:00.429) 0:00:09.481 ******** 2026-01-30 03:08:11.014943 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:11.014953 | orchestrator | 2026-01-30 03:08:11.014964 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 
2026-01-30 03:08:11.014975 | orchestrator | Friday 30 January 2026 03:08:07 +0000 (0:00:01.173) 0:00:10.654 ******** 2026-01-30 03:08:11.014986 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-30 03:08:11.014998 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:11.015008 | orchestrator | 2026-01-30 03:08:11.015019 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-01-30 03:08:11.015030 | orchestrator | Friday 30 January 2026 03:08:08 +0000 (0:00:00.869) 0:00:11.523 ******** 2026-01-30 03:08:11.015041 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:11.015052 | orchestrator | 2026-01-30 03:08:11.015063 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-01-30 03:08:11.015074 | orchestrator | Friday 30 January 2026 03:08:09 +0000 (0:00:01.584) 0:00:13.108 ******** 2026-01-30 03:08:11.015085 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:11.015095 | orchestrator | 2026-01-30 03:08:11.015106 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:08:11.015117 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:08:11.015155 | orchestrator | 2026-01-30 03:08:11.015167 | orchestrator | 2026-01-30 03:08:11.015178 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:08:11.015189 | orchestrator | Friday 30 January 2026 03:08:10 +0000 (0:00:00.904) 0:00:14.012 ******** 2026-01-30 03:08:11.015200 | orchestrator | =============================================================================== 2026-01-30 03:08:11.015211 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.64s 2026-01-30 03:08:11.015228 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.58s 
2026-01-30 03:08:11.015247 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.19s 2026-01-30 03:08:11.015265 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s 2026-01-30 03:08:11.015283 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s 2026-01-30 03:08:11.015301 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s 2026-01-30 03:08:11.015319 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.65s 2026-01-30 03:08:11.015337 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2026-01-30 03:08:11.015429 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s 2026-01-30 03:08:11.015446 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2026-01-30 03:08:11.015457 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s 2026-01-30 03:08:11.278331 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-01-30 03:08:11.314404 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-01-30 03:08:11.314501 | orchestrator | Dload Upload Total Spent Left Speed 2026-01-30 03:08:11.388005 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 202 0 --:--:-- --:--:-- --:--:-- 205 2026-01-30 03:08:11.404060 | orchestrator | + osism apply --environment custom workarounds 2026-01-30 03:08:13.324956 | orchestrator | 2026-01-30 03:08:13 | INFO  | Trying to run play workarounds in environment custom 2026-01-30 03:08:23.542080 | orchestrator | 2026-01-30 03:08:23 | INFO  | Task cbd5a0fb-142f-4124-8323-4a18f2e3dede (workarounds) was prepared for execution. 
2026-01-30 03:08:23.542197 | orchestrator | 2026-01-30 03:08:23 | INFO  | It takes a moment until task cbd5a0fb-142f-4124-8323-4a18f2e3dede (workarounds) has been started and output is visible here. 2026-01-30 03:08:46.818044 | orchestrator | 2026-01-30 03:08:46.818127 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 03:08:46.818135 | orchestrator | 2026-01-30 03:08:46.818142 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-01-30 03:08:46.818148 | orchestrator | Friday 30 January 2026 03:08:27 +0000 (0:00:00.092) 0:00:00.093 ******** 2026-01-30 03:08:46.818156 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-01-30 03:08:46.818163 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-01-30 03:08:46.818169 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-01-30 03:08:46.818176 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-01-30 03:08:46.818183 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-01-30 03:08:46.818189 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-01-30 03:08:46.818196 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-01-30 03:08:46.818202 | orchestrator | 2026-01-30 03:08:46.818208 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-01-30 03:08:46.818215 | orchestrator | 2026-01-30 03:08:46.818221 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-30 03:08:46.818249 | orchestrator | Friday 30 January 2026 03:08:27 +0000 (0:00:00.548) 0:00:00.641 ******** 2026-01-30 03:08:46.818256 | orchestrator | ok: [testbed-manager] 2026-01-30 03:08:46.818264 | orchestrator | 2026-01-30 03:08:46.818271 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-01-30 03:08:46.818278 | orchestrator | 2026-01-30 03:08:46.818284 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-01-30 03:08:46.818290 | orchestrator | Friday 30 January 2026 03:08:29 +0000 (0:00:01.968) 0:00:02.609 ******** 2026-01-30 03:08:46.818296 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:08:46.818302 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:08:46.818309 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:08:46.818316 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:08:46.818322 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:08:46.818340 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:08:46.818345 | orchestrator | 2026-01-30 03:08:46.818352 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-01-30 03:08:46.818358 | orchestrator | 2026-01-30 03:08:46.818364 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-01-30 03:08:46.818370 | orchestrator | Friday 30 January 2026 03:08:31 +0000 (0:00:01.765) 0:00:04.375 ******** 2026-01-30 03:08:46.818397 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-30 03:08:46.818405 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-30 03:08:46.818412 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-30 03:08:46.818418 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-30 03:08:46.818425 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-30 03:08:46.818431 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-01-30 03:08:46.818436 | orchestrator | 2026-01-30 03:08:46.818442 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-01-30 03:08:46.818448 | orchestrator | Friday 30 January 2026 03:08:33 +0000 (0:00:01.558) 0:00:05.933 ******** 2026-01-30 03:08:46.818454 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:08:46.818462 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:08:46.818466 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:08:46.818470 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:08:46.818474 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:08:46.818478 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:08:46.818481 | orchestrator | 2026-01-30 03:08:46.818485 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-01-30 03:08:46.818489 | orchestrator | Friday 30 January 2026 03:08:36 +0000 (0:00:03.535) 0:00:09.469 ******** 2026-01-30 03:08:46.818493 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:08:46.818497 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:08:46.818500 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:08:46.818504 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:08:46.818508 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:08:46.818512 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:08:46.818516 | orchestrator | 2026-01-30 03:08:46.818519 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-01-30 03:08:46.818523 | orchestrator | 2026-01-30 03:08:46.818527 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-01-30 03:08:46.818531 | orchestrator | Friday 30 January 2026 03:08:37 +0000 (0:00:00.624) 0:00:10.093 ******** 2026-01-30 
03:08:46.818534 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:08:46.818538 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:08:46.818542 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:08:46.818546 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:08:46.818556 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:08:46.818559 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:46.818563 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:08:46.818567 | orchestrator | 2026-01-30 03:08:46.818570 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-01-30 03:08:46.818574 | orchestrator | Friday 30 January 2026 03:08:38 +0000 (0:00:01.468) 0:00:11.562 ******** 2026-01-30 03:08:46.818578 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:08:46.818582 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:08:46.818586 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:08:46.818590 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:08:46.818594 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:08:46.818599 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:08:46.818616 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:46.818621 | orchestrator | 2026-01-30 03:08:46.818625 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-01-30 03:08:46.818630 | orchestrator | Friday 30 January 2026 03:08:40 +0000 (0:00:01.506) 0:00:13.068 ******** 2026-01-30 03:08:46.818635 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:08:46.818639 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:08:46.818644 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:08:46.818648 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:08:46.818653 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:08:46.818657 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:08:46.818662 | orchestrator | ok: [testbed-manager] 
2026-01-30 03:08:46.818666 | orchestrator | 2026-01-30 03:08:46.818671 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-01-30 03:08:46.818675 | orchestrator | Friday 30 January 2026 03:08:41 +0000 (0:00:01.482) 0:00:14.550 ******** 2026-01-30 03:08:46.818679 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:08:46.818684 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:08:46.818688 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:08:46.818692 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:08:46.818697 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:08:46.818701 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:08:46.818705 | orchestrator | changed: [testbed-manager] 2026-01-30 03:08:46.818710 | orchestrator | 2026-01-30 03:08:46.818714 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-01-30 03:08:46.818719 | orchestrator | Friday 30 January 2026 03:08:43 +0000 (0:00:01.706) 0:00:16.256 ******** 2026-01-30 03:08:46.818723 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:08:46.818728 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:08:46.818732 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:08:46.818737 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:08:46.818741 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:08:46.818745 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:08:46.818750 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:08:46.818756 | orchestrator | 2026-01-30 03:08:46.818762 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-01-30 03:08:46.818769 | orchestrator | 2026-01-30 03:08:46.818776 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-01-30 03:08:46.818786 | orchestrator | Friday 30 January 2026 03:08:44 +0000 (0:00:00.573) 
0:00:16.830 ******** 2026-01-30 03:08:46.818793 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:08:46.818800 | orchestrator | ok: [testbed-manager] 2026-01-30 03:08:46.818807 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:08:46.818813 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:08:46.818819 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:08:46.818826 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:08:46.818833 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:08:46.818839 | orchestrator | 2026-01-30 03:08:46.818845 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:08:46.818851 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 03:08:46.818863 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:08:46.818868 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:08:46.818872 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:08:46.818877 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:08:46.818881 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:08:46.818885 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:08:46.818889 | orchestrator | 2026-01-30 03:08:46.818894 | orchestrator | 2026-01-30 03:08:46.818898 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:08:46.818903 | orchestrator | Friday 30 January 2026 03:08:46 +0000 (0:00:02.654) 0:00:19.485 ******** 2026-01-30 03:08:46.818907 | orchestrator | 
=============================================================================== 2026-01-30 03:08:46.818911 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.54s 2026-01-30 03:08:46.818916 | orchestrator | Install python3-docker -------------------------------------------------- 2.66s 2026-01-30 03:08:46.818920 | orchestrator | Apply netplan configuration --------------------------------------------- 1.97s 2026-01-30 03:08:46.818925 | orchestrator | Apply netplan configuration --------------------------------------------- 1.77s 2026-01-30 03:08:46.818930 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s 2026-01-30 03:08:46.818934 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.56s 2026-01-30 03:08:46.818938 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.51s 2026-01-30 03:08:46.818943 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s 2026-01-30 03:08:46.818947 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.47s 2026-01-30 03:08:46.818952 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.62s 2026-01-30 03:08:46.818956 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.57s 2026-01-30 03:08:46.818963 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.55s 2026-01-30 03:08:47.347532 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-01-30 03:08:59.453589 | orchestrator | 2026-01-30 03:08:59 | INFO  | Task e0283f85-0f85-47f9-ac15-4616c10e8ccd (reboot) was prepared for execution. 
2026-01-30 03:08:59.453672 | orchestrator | 2026-01-30 03:08:59 | INFO  | It takes a moment until task e0283f85-0f85-47f9-ac15-4616c10e8ccd (reboot) has been started and output is visible here. 2026-01-30 03:09:08.815992 | orchestrator | 2026-01-30 03:09:08.816111 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-30 03:09:08.816129 | orchestrator | 2026-01-30 03:09:08.816142 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-30 03:09:08.816153 | orchestrator | Friday 30 January 2026 03:09:03 +0000 (0:00:00.145) 0:00:00.145 ******** 2026-01-30 03:09:08.816165 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:09:08.816177 | orchestrator | 2026-01-30 03:09:08.816188 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-01-30 03:09:08.816199 | orchestrator | Friday 30 January 2026 03:09:03 +0000 (0:00:00.083) 0:00:00.229 ******** 2026-01-30 03:09:08.816235 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:09:08.816246 | orchestrator | 2026-01-30 03:09:08.816257 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-01-30 03:09:08.816268 | orchestrator | Friday 30 January 2026 03:09:04 +0000 (0:00:00.904) 0:00:01.133 ******** 2026-01-30 03:09:08.816279 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:09:08.816290 | orchestrator | 2026-01-30 03:09:08.816301 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-01-30 03:09:08.816311 | orchestrator | 2026-01-30 03:09:08.816322 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-01-30 03:09:08.816333 | orchestrator | Friday 30 January 2026 03:09:04 +0000 (0:00:00.118) 0:00:01.252 ******** 2026-01-30 03:09:08.816344 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:09:08.816355 | 
orchestrator |
2026-01-30 03:09:08.816380 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-30 03:09:08.816454 | orchestrator | Friday 30 January 2026 03:09:04 +0000 (0:00:00.101) 0:00:01.353 ********
2026-01-30 03:09:08.816466 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:09:08.816477 | orchestrator |
2026-01-30 03:09:08.816488 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-30 03:09:08.816499 | orchestrator | Friday 30 January 2026 03:09:05 +0000 (0:00:00.690) 0:00:02.044 ********
2026-01-30 03:09:08.816510 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:09:08.816521 | orchestrator |
2026-01-30 03:09:08.816534 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-30 03:09:08.816548 | orchestrator |
2026-01-30 03:09:08.816562 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-30 03:09:08.816575 | orchestrator | Friday 30 January 2026 03:09:05 +0000 (0:00:00.093) 0:00:02.137 ********
2026-01-30 03:09:08.816588 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:09:08.816600 | orchestrator |
2026-01-30 03:09:08.816613 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-30 03:09:08.816628 | orchestrator | Friday 30 January 2026 03:09:05 +0000 (0:00:00.153) 0:00:02.291 ********
2026-01-30 03:09:08.816645 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:09:08.816664 | orchestrator |
2026-01-30 03:09:08.816684 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-30 03:09:08.816697 | orchestrator | Friday 30 January 2026 03:09:06 +0000 (0:00:00.634) 0:00:02.926 ********
2026-01-30 03:09:08.816711 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:09:08.816724 | orchestrator |
2026-01-30 03:09:08.816737 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-30 03:09:08.816750 | orchestrator |
2026-01-30 03:09:08.816762 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-30 03:09:08.816775 | orchestrator | Friday 30 January 2026 03:09:06 +0000 (0:00:00.101) 0:00:03.027 ********
2026-01-30 03:09:08.816787 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:09:08.816800 | orchestrator |
2026-01-30 03:09:08.816813 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-30 03:09:08.816825 | orchestrator | Friday 30 January 2026 03:09:06 +0000 (0:00:00.079) 0:00:03.107 ********
2026-01-30 03:09:08.816839 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:09:08.816851 | orchestrator |
2026-01-30 03:09:08.816864 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-30 03:09:08.816877 | orchestrator | Friday 30 January 2026 03:09:06 +0000 (0:00:00.560) 0:00:03.667 ********
2026-01-30 03:09:08.816888 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:09:08.816899 | orchestrator |
2026-01-30 03:09:08.816910 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-30 03:09:08.816921 | orchestrator |
2026-01-30 03:09:08.816932 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-30 03:09:08.816943 | orchestrator | Friday 30 January 2026 03:09:06 +0000 (0:00:00.094) 0:00:03.762 ********
2026-01-30 03:09:08.816953 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:09:08.816973 | orchestrator |
2026-01-30 03:09:08.816984 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-30 03:09:08.816995 | orchestrator | Friday 30 January 2026 03:09:07 +0000 (0:00:00.086) 0:00:03.848 ********
2026-01-30 03:09:08.817005 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:09:08.817016 | orchestrator |
2026-01-30 03:09:08.817027 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-30 03:09:08.817039 | orchestrator | Friday 30 January 2026 03:09:07 +0000 (0:00:00.721) 0:00:04.569 ********
2026-01-30 03:09:08.817050 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:09:08.817061 | orchestrator |
2026-01-30 03:09:08.817071 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-30 03:09:08.817082 | orchestrator |
2026-01-30 03:09:08.817093 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-30 03:09:08.817104 | orchestrator | Friday 30 January 2026 03:09:07 +0000 (0:00:00.107) 0:00:04.677 ********
2026-01-30 03:09:08.817115 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:09:08.817125 | orchestrator |
2026-01-30 03:09:08.817136 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-30 03:09:08.817147 | orchestrator | Friday 30 January 2026 03:09:07 +0000 (0:00:00.088) 0:00:04.765 ********
2026-01-30 03:09:08.817158 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:09:08.817169 | orchestrator |
2026-01-30 03:09:08.817180 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-30 03:09:08.817191 | orchestrator | Friday 30 January 2026 03:09:08 +0000 (0:00:00.632) 0:00:05.398 ********
2026-01-30 03:09:08.817220 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:09:08.817232 | orchestrator |
2026-01-30 03:09:08.817243 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:09:08.817255 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:09:08.817267 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:09:08.817278 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:09:08.817289 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:09:08.817300 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:09:08.817316 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:09:08.817328 | orchestrator |
2026-01-30 03:09:08.817339 | orchestrator |
2026-01-30 03:09:08.817350 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:09:08.817361 | orchestrator | Friday 30 January 2026 03:09:08 +0000 (0:00:00.032) 0:00:05.430 ********
2026-01-30 03:09:08.817372 | orchestrator | ===============================================================================
2026-01-30 03:09:08.817383 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.14s
2026-01-30 03:09:08.817414 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s
2026-01-30 03:09:08.817426 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s
2026-01-30 03:09:08.984868 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-01-30 03:09:20.978552 | orchestrator | 2026-01-30 03:09:20 | INFO  | Task 38b74709-a91e-40f1-bb76-66c2559c0c2e (wait-for-connection) was prepared for execution.
2026-01-30 03:09:20.978658 | orchestrator | 2026-01-30 03:09:20 | INFO  | It takes a moment until task 38b74709-a91e-40f1-bb76-66c2559c0c2e (wait-for-connection) has been started and output is visible here.
2026-01-30 03:09:36.545000 | orchestrator |
2026-01-30 03:09:36.545112 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-01-30 03:09:36.545128 | orchestrator |
2026-01-30 03:09:36.545139 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-01-30 03:09:36.545150 | orchestrator | Friday 30 January 2026 03:09:24 +0000 (0:00:00.165) 0:00:00.165 ********
2026-01-30 03:09:36.545160 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:09:36.545171 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:09:36.545181 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:09:36.545191 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:09:36.545200 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:09:36.545210 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:09:36.545220 | orchestrator |
2026-01-30 03:09:36.545230 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:09:36.545240 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:09:36.545252 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:09:36.545262 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:09:36.545271 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:09:36.545282 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:09:36.545292 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:09:36.545302 | orchestrator |
2026-01-30 03:09:36.545311 | orchestrator |
2026-01-30 03:09:36.545321 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:09:36.545331 | orchestrator | Friday 30 January 2026 03:09:36 +0000 (0:00:11.490) 0:00:11.655 ********
2026-01-30 03:09:36.545341 | orchestrator | ===============================================================================
2026-01-30 03:09:36.545350 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.49s
2026-01-30 03:09:36.881988 | orchestrator | + osism apply hddtemp
2026-01-30 03:09:48.922368 | orchestrator | 2026-01-30 03:09:48 | INFO  | Task 99443f4e-b470-49ef-b5bd-9842c7037555 (hddtemp) was prepared for execution.
2026-01-30 03:09:48.922537 | orchestrator | 2026-01-30 03:09:48 | INFO  | It takes a moment until task 99443f4e-b470-49ef-b5bd-9842c7037555 (hddtemp) has been started and output is visible here.
2026-01-30 03:10:15.635695 | orchestrator |
2026-01-30 03:10:15.635809 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-01-30 03:10:15.635839 | orchestrator |
2026-01-30 03:10:15.635853 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-01-30 03:10:15.635865 | orchestrator | Friday 30 January 2026 03:09:52 +0000 (0:00:00.184) 0:00:00.184 ********
2026-01-30 03:10:15.635876 | orchestrator | ok: [testbed-manager]
2026-01-30 03:10:15.635889 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:10:15.635900 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:10:15.635911 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:10:15.635922 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:10:15.635932 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:10:15.635943 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:10:15.635954 | orchestrator |
2026-01-30 03:10:15.635965 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-01-30 03:10:15.635976 | orchestrator | Friday 30 January 2026 03:09:53 +0000 (0:00:00.505) 0:00:00.690 ********
2026-01-30 03:10:15.635988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:10:15.636026 | orchestrator |
2026-01-30 03:10:15.636037 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-01-30 03:10:15.636048 | orchestrator | Friday 30 January 2026 03:09:54 +0000 (0:00:00.930) 0:00:01.620 ********
2026-01-30 03:10:15.636059 | orchestrator | ok: [testbed-manager]
2026-01-30 03:10:15.636070 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:10:15.636081 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:10:15.636107 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:10:15.636118 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:10:15.636129 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:10:15.636140 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:10:15.636150 | orchestrator |
2026-01-30 03:10:15.636161 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-01-30 03:10:15.636172 | orchestrator | Friday 30 January 2026 03:09:56 +0000 (0:00:01.994) 0:00:03.615 ********
2026-01-30 03:10:15.636183 | orchestrator | changed: [testbed-manager]
2026-01-30 03:10:15.636196 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:10:15.636213 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:10:15.636231 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:10:15.636250 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:10:15.636268 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:10:15.636286 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:10:15.636301 | orchestrator |
2026-01-30 03:10:15.636313 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-01-30 03:10:15.636324 | orchestrator | Friday 30 January 2026 03:09:57 +0000 (0:00:00.996) 0:00:04.611 ********
2026-01-30 03:10:15.636335 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:10:15.636346 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:10:15.636356 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:10:15.636367 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:10:15.636378 | orchestrator | ok: [testbed-manager]
2026-01-30 03:10:15.636389 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:10:15.636399 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:10:15.636410 | orchestrator |
2026-01-30 03:10:15.636421 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-01-30 03:10:15.636467 | orchestrator | Friday 30 January 2026 03:09:58 +0000 (0:00:01.156) 0:00:05.768 ********
2026-01-30 03:10:15.636483 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:10:15.636500 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:10:15.636519 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:10:15.636537 | orchestrator | changed: [testbed-manager]
2026-01-30 03:10:15.636555 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:10:15.636574 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:10:15.636585 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:10:15.636596 | orchestrator |
2026-01-30 03:10:15.636607 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-01-30 03:10:15.636617 | orchestrator | Friday 30 January 2026 03:09:58 +0000 (0:00:00.771) 0:00:06.540 ********
2026-01-30 03:10:15.636628 | orchestrator | changed: [testbed-manager]
2026-01-30 03:10:15.636639 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:10:15.636649 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:10:15.636660 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:10:15.636671 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:10:15.636681 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:10:15.636692 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:10:15.636703 | orchestrator |
2026-01-30 03:10:15.636714 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-01-30 03:10:15.636724 | orchestrator | Friday 30 January 2026 03:10:12 +0000 (0:00:13.309) 0:00:19.850 ********
2026-01-30 03:10:15.636736 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:10:15.636759 | orchestrator |
2026-01-30 03:10:15.636770 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-01-30 03:10:15.636781 | orchestrator | Friday 30 January 2026 03:10:13 +0000 (0:00:01.154) 0:00:21.004 ********
2026-01-30 03:10:15.636792 | orchestrator | changed: [testbed-manager]
2026-01-30 03:10:15.636803 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:10:15.636814 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:10:15.636825 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:10:15.636835 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:10:15.636846 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:10:15.636856 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:10:15.636867 | orchestrator |
2026-01-30 03:10:15.636878 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:10:15.636889 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:10:15.636921 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:10:15.636933 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:10:15.636944 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:10:15.636955 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:10:15.636966 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:10:15.636977 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:10:15.636988 | orchestrator |
2026-01-30 03:10:15.636998 | orchestrator |
2026-01-30 03:10:15.637009 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:10:15.637020 | orchestrator | Friday 30 January 2026 03:10:15 +0000 (0:00:01.834) 0:00:22.839 ********
2026-01-30 03:10:15.637031 | orchestrator | ===============================================================================
2026-01-30 03:10:15.637048 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.31s
2026-01-30 03:10:15.637060 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.99s
2026-01-30 03:10:15.637071 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.83s
2026-01-30 03:10:15.637081 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.16s
2026-01-30 03:10:15.637092 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.15s
2026-01-30 03:10:15.637103 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.00s
2026-01-30 03:10:15.637114 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.93s
2026-01-30 03:10:15.637125 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.77s
2026-01-30 03:10:15.637136 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.51s
2026-01-30 03:10:15.896948 | orchestrator | ++ semver 9.5.0 7.1.1
2026-01-30 03:10:15.948401 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-30 03:10:15.948582 | orchestrator | + sudo systemctl restart manager.service
2026-01-30 03:10:29.509241 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-30 03:10:29.509389 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-01-30 03:10:29.509419 | orchestrator | + local max_attempts=60
2026-01-30 03:10:29.509550 | orchestrator | + local name=ceph-ansible
2026-01-30 03:10:29.509572 | orchestrator | + local attempt_num=1
2026-01-30 03:10:29.509611 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:10:29.572397 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:10:29.572545 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:10:29.572562 | orchestrator | + sleep 5
2026-01-30 03:10:34.577260 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:10:34.604276 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:10:34.604348 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:10:34.604353 | orchestrator | + sleep 5
2026-01-30 03:10:39.606870 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:10:39.637022 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:10:39.637126 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:10:39.637141 | orchestrator | + sleep 5
2026-01-30 03:10:44.640936 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:10:44.677289 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:10:44.677386 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:10:44.677398 | orchestrator | + sleep 5
2026-01-30 03:10:49.681860 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:10:49.716311 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:10:49.716393 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:10:49.716402 | orchestrator | + sleep 5
2026-01-30 03:10:54.720874 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:10:54.760487 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:10:54.760590 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:10:54.760605 | orchestrator | + sleep 5
2026-01-30 03:10:59.765417 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:10:59.807852 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:10:59.807958 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:10:59.807973 | orchestrator | + sleep 5
2026-01-30 03:11:04.814568 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:11:04.889794 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:04.889885 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:11:04.889900 | orchestrator | + sleep 5
2026-01-30 03:11:09.895244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:11:09.929240 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:09.929373 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:11:09.929399 | orchestrator | + sleep 5
2026-01-30 03:11:14.932486 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:11:14.969380 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:14.969567 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:11:14.969618 | orchestrator | + sleep 5
2026-01-30 03:11:19.974208 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:11:20.011165 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:20.011263 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:11:20.011280 | orchestrator | + sleep 5
2026-01-30 03:11:25.016643 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:11:25.057721 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:25.057851 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:11:25.057875 | orchestrator | + sleep 5
2026-01-30 03:11:30.062594 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:11:30.100534 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:30.100644 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-01-30 03:11:30.100660 | orchestrator | + sleep 5
2026-01-30 03:11:35.105848 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-01-30 03:11:35.147061 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:35.147168 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-30 03:11:35.147185 | orchestrator | + local max_attempts=60
2026-01-30 03:11:35.147197 | orchestrator | + local name=kolla-ansible
2026-01-30 03:11:35.147208 | orchestrator | + local attempt_num=1
2026-01-30 03:11:35.148036 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-30 03:11:35.178724 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:35.178854 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-30 03:11:35.178876 | orchestrator | + local max_attempts=60
2026-01-30 03:11:35.178888 | orchestrator | + local name=osism-ansible
2026-01-30 03:11:35.178899 | orchestrator | + local attempt_num=1
2026-01-30 03:11:35.179645 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-30 03:11:35.207171 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-30 03:11:35.207259 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-30 03:11:35.207272 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-01-30 03:11:35.360923 | orchestrator | ARA in ceph-ansible already disabled.
2026-01-30 03:11:35.513179 | orchestrator | ARA in kolla-ansible already disabled.
2026-01-30 03:11:35.669021 | orchestrator | ARA in osism-ansible already disabled.
2026-01-30 03:11:35.816329 | orchestrator | ARA in osism-kubernetes already disabled.
2026-01-30 03:11:35.817500 | orchestrator | + osism apply gather-facts
2026-01-30 03:11:47.694629 | orchestrator | 2026-01-30 03:11:47 | INFO  | Task b76ba15e-8d42-4c3c-9cd9-a6d185e9bac7 (gather-facts) was prepared for execution.
2026-01-30 03:11:47.694751 | orchestrator | 2026-01-30 03:11:47 | INFO  | It takes a moment until task b76ba15e-8d42-4c3c-9cd9-a6d185e9bac7 (gather-facts) has been started and output is visible here.
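The `set -x` trace above repeatedly polls `docker inspect` on the ceph-ansible, kolla-ansible, and osism-ansible containers until each reports `healthy`. Reconstructed from that trace, the helper can be sketched roughly like this (an illustrative reconstruction, not the actual testbed script; it calls `docker` via `$PATH` rather than `/usr/bin/docker` so it can be exercised with a stub):

```shell
#!/usr/bin/env bash
# Poll a container's health status until it becomes "healthy",
# retrying up to max_attempts times with a 5 second pause between
# checks, mirroring the loop visible in the trace above.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

A call such as `wait_for_container_healthy 60 ceph-ansible` then blocks through the `unhealthy` and `starting` phases seen in the trace and returns as soon as the container's health check passes.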
2026-01-30 03:11:59.831083 | orchestrator |
2026-01-30 03:11:59.831222 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-30 03:11:59.831241 | orchestrator |
2026-01-30 03:11:59.831255 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-30 03:11:59.831267 | orchestrator | Friday 30 January 2026 03:11:51 +0000 (0:00:00.157) 0:00:00.157 ********
2026-01-30 03:11:59.831278 | orchestrator | ok: [testbed-manager]
2026-01-30 03:11:59.831290 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:11:59.831301 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:11:59.831312 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:11:59.831323 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:11:59.831334 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:11:59.831344 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:11:59.831355 | orchestrator |
2026-01-30 03:11:59.831366 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-30 03:11:59.831377 | orchestrator |
2026-01-30 03:11:59.831389 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-30 03:11:59.831400 | orchestrator | Friday 30 January 2026 03:11:58 +0000 (0:00:07.900) 0:00:08.058 ********
2026-01-30 03:11:59.831410 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:11:59.831422 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:11:59.831433 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:11:59.831444 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:11:59.831455 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:11:59.831465 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:11:59.831476 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:11:59.831540 | orchestrator |
2026-01-30 03:11:59.831555 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:11:59.831566 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:11:59.831579 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:11:59.831590 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:11:59.831601 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:11:59.831612 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:11:59.831623 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:11:59.831660 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:11:59.831673 | orchestrator |
2026-01-30 03:11:59.831685 | orchestrator |
2026-01-30 03:11:59.831698 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:11:59.831710 | orchestrator | Friday 30 January 2026 03:11:59 +0000 (0:00:00.499) 0:00:08.558 ********
2026-01-30 03:11:59.831722 | orchestrator | ===============================================================================
2026-01-30 03:11:59.831735 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.90s
2026-01-30 03:11:59.831748 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-01-30 03:12:00.101260 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-01-30 03:12:00.113264 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-01-30 03:12:00.124917 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-01-30 03:12:00.135003 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-01-30 03:12:00.145012 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-01-30 03:12:00.154810 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-01-30 03:12:00.164481 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-01-30 03:12:00.174242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-01-30 03:12:00.184266 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-01-30 03:12:00.197856 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-01-30 03:12:00.207737 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-01-30 03:12:00.217560 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-01-30 03:12:00.227673 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-01-30 03:12:00.238784 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-01-30 03:12:00.248803 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-01-30 03:12:00.258730 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-01-30 03:12:00.274978 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-01-30 03:12:00.287606 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-01-30 03:12:00.298479 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-01-30 03:12:00.308455 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-01-30 03:12:00.318642 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-01-30 03:12:00.332092 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-01-30 03:12:00.343284 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-01-30 03:12:00.355432 | orchestrator | + [[ false == \t\r\u\e ]]
2026-01-30 03:12:00.441660 | orchestrator | ok: Runtime: 0:22:57.233805
2026-01-30 03:12:00.517175 |
2026-01-30 03:12:00.517289 | TASK [Deploy services]
2026-01-30 03:12:01.212260 | orchestrator |
2026-01-30 03:12:01.212486 | orchestrator | # DEPLOY SERVICES
2026-01-30 03:12:01.212553 | orchestrator |
2026-01-30 03:12:01.212569 | orchestrator | + set -e
2026-01-30 03:12:01.212582 | orchestrator | + echo
2026-01-30 03:12:01.212596 | orchestrator | + echo '# DEPLOY SERVICES'
2026-01-30 03:12:01.212611 | orchestrator | + echo
2026-01-30 03:12:01.212656 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 03:12:01.212679 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 03:12:01.212695 | orchestrator | ++ INTERACTIVE=false
2026-01-30 03:12:01.212707 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 03:12:01.212729 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 03:12:01.212740 | orchestrator | + source /opt/manager-vars.sh
2026-01-30 03:12:01.212756 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-30 03:12:01.212767 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-30 03:12:01.212785 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-30 03:12:01.212796 | orchestrator | ++ CEPH_VERSION=reef
2026-01-30 03:12:01.212810 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-30 03:12:01.212822 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-30 03:12:01.212837 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-30 03:12:01.212848 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-30 03:12:01.212859 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-30 03:12:01.212872 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-30 03:12:01.212882 | orchestrator | ++ export ARA=false
2026-01-30 03:12:01.212894 | orchestrator | ++ ARA=false
2026-01-30 03:12:01.212919 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-30 03:12:01.212931 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-30 03:12:01.212942 | orchestrator | ++ export TEMPEST=false
2026-01-30 03:12:01.212953 | orchestrator | ++ TEMPEST=false
2026-01-30 03:12:01.212964 | orchestrator | ++ export IS_ZUUL=true
2026-01-30 03:12:01.212974 | orchestrator | ++ IS_ZUUL=true
2026-01-30 03:12:01.212986 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 03:12:01.212997 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 03:12:01.213008 | orchestrator | ++ export EXTERNAL_API=false
2026-01-30 03:12:01.213019 | orchestrator | ++ EXTERNAL_API=false
2026-01-30 03:12:01.213030 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-30 03:12:01.213041 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-30 03:12:01.213052 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-30 03:12:01.213063 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-30 03:12:01.213074 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-30 03:12:01.213093 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-30 03:12:01.213104 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-01-30 03:12:01.221047 | orchestrator | + set -e
2026-01-30 03:12:01.222534 | orchestrator |
2026-01-30 03:12:01.222606 | orchestrator | # PULL IMAGES
2026-01-30 03:12:01.222625 | orchestrator |
2026-01-30 03:12:01.222644 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 03:12:01.222664 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 03:12:01.222682 | orchestrator | ++ INTERACTIVE=false
2026-01-30 03:12:01.222701 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 03:12:01.222719 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 03:12:01.222737 | orchestrator | + source /opt/manager-vars.sh
2026-01-30 03:12:01.222755 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-30 03:12:01.222774 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-30 03:12:01.222793 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-30 03:12:01.222811 | orchestrator | ++ CEPH_VERSION=reef
2026-01-30 03:12:01.222830 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-30 03:12:01.222847 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-30 03:12:01.222866 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-30 03:12:01.222885 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-30 03:12:01.222903 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-30 03:12:01.222922 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-30 03:12:01.222940 | orchestrator | ++ export ARA=false
2026-01-30 03:12:01.222958 | orchestrator | ++ ARA=false
2026-01-30 03:12:01.222977 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-30 03:12:01.222996 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-30 03:12:01.223015 | orchestrator | ++ export TEMPEST=false
2026-01-30 03:12:01.223034 | orchestrator | ++ TEMPEST=false 2026-01-30 03:12:01.223052 | orchestrator | ++ export IS_ZUUL=true 2026-01-30 03:12:01.223070 | orchestrator | ++ IS_ZUUL=true 2026-01-30 03:12:01.223088 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 03:12:01.223107 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 03:12:01.223126 | orchestrator | ++ export EXTERNAL_API=false 2026-01-30 03:12:01.223144 | orchestrator | ++ EXTERNAL_API=false 2026-01-30 03:12:01.223162 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-30 03:12:01.223181 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-30 03:12:01.223227 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-30 03:12:01.223247 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-30 03:12:01.223265 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-30 03:12:01.223284 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-30 03:12:01.223302 | orchestrator | + echo 2026-01-30 03:12:01.223321 | orchestrator | + echo '# PULL IMAGES' 2026-01-30 03:12:01.223341 | orchestrator | + echo 2026-01-30 03:12:01.223367 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-30 03:12:01.271376 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-30 03:12:01.271448 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-01-30 03:12:03.169333 | orchestrator | 2026-01-30 03:12:03 | INFO  | Trying to run play pull-images in environment custom 2026-01-30 03:12:13.292635 | orchestrator | 2026-01-30 03:12:13 | INFO  | Task ae7d3dde-7c14-4f06-9725-96c02f846679 (pull-images) was prepared for execution. 2026-01-30 03:12:13.292768 | orchestrator | 2026-01-30 03:12:13 | INFO  | Task ae7d3dde-7c14-4f06-9725-96c02f846679 is running in background. No more output. Check ARA for logs. 
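The trace above gates the pull-images step on `semver 9.5.0 7.0.0` returning a non-negative result before invoking `osism apply --no-wait -r 2 -e custom pull-images`. A minimal sketch of such a version gate, assuming only POSIX shell plus GNU `sort -V`; the `semver_cmp` function is illustrative and not the testbed's actual `semver` helper:

```shell
#!/bin/sh
# semver_cmp A B -> prints 1 if A > B, 0 if equal, -1 if A < B.
# Relies on GNU sort's -V (version sort); hypothetical helper name.
semver_cmp() {
    a="$1" b="$2"
    if [ "$a" = "$b" ]; then
        echo 0
        return
    fi
    # Under version sort, the smaller version comes first.
    lowest=$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)
    if [ "$lowest" = "$b" ]; then
        echo 1
    else
        echo -1
    fi
}

# Gate a deployment step on a minimum version, as the log does.
if [ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]; then
    echo "version gate passed"
fi
```

The log's `[[ 1 -ge 0 ]]` line is exactly this pattern: the comparison result (1, since 9.5.0 > 7.0.0) is tested against 0 to decide whether the newer code path runs.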
2026-01-30 03:12:13.480870 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-01-30 03:12:25.338163 | orchestrator | 2026-01-30 03:12:25 | INFO  | Task 4a221d1c-b3a7-4da6-91f7-1650892199d0 (cgit) was prepared for execution.
2026-01-30 03:12:25.338295 | orchestrator | 2026-01-30 03:12:25 | INFO  | Task 4a221d1c-b3a7-4da6-91f7-1650892199d0 is running in background. No more output. Check ARA for logs.
2026-01-30 03:12:37.426902 | orchestrator | 2026-01-30 03:12:37 | INFO  | Task d4dc208f-bcaa-4f50-be20-2a05f9524a7c (dotfiles) was prepared for execution.
2026-01-30 03:12:37.427096 | orchestrator | 2026-01-30 03:12:37 | INFO  | Task d4dc208f-bcaa-4f50-be20-2a05f9524a7c is running in background. No more output. Check ARA for logs.
2026-01-30 03:12:49.684196 | orchestrator | 2026-01-30 03:12:49 | INFO  | Task 9e5f8cc5-2059-4861-8c4d-de8d086673f8 (homer) was prepared for execution.
2026-01-30 03:12:49.684337 | orchestrator | 2026-01-30 03:12:49 | INFO  | Task 9e5f8cc5-2059-4861-8c4d-de8d086673f8 is running in background. No more output. Check ARA for logs.
2026-01-30 03:13:02.077661 | orchestrator | 2026-01-30 03:13:02 | INFO  | Task 91a81d08-9621-4f87-ac48-239f47e653dd (phpmyadmin) was prepared for execution.
2026-01-30 03:13:02.077779 | orchestrator | 2026-01-30 03:13:02 | INFO  | Task 91a81d08-9621-4f87-ac48-239f47e653dd is running in background. No more output. Check ARA for logs.
2026-01-30 03:13:14.458013 | orchestrator | 2026-01-30 03:13:14 | INFO  | Task 7266d019-2dff-4cf8-8022-fbe2039be96c (sosreport) was prepared for execution.
2026-01-30 03:13:14.458194 | orchestrator | 2026-01-30 03:13:14 | INFO  | Task 7266d019-2dff-4cf8-8022-fbe2039be96c is running in background. No more output. Check ARA for logs.
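The helper plays above (cgit, dotfiles, homer, phpmyadmin, sosreport) are submitted one after another and each detaches into the background. A hedged sketch of how a deploy script might loop over such a service list; the service names come from the log, but the loop itself is illustrative and not the actual 001-helpers.sh:

```shell
#!/bin/sh
set -e

# Helper services observed in the log; each becomes one `osism apply` run.
SERVICES="cgit dotfiles homer phpmyadmin sosreport"

for service in $SERVICES; do
    # The real run queues each play and detaches ("running in background.
    # No more output. Check ARA for logs."); here we only print the command.
    echo "osism apply ${service}"
done
```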
2026-01-30 03:13:14.770484 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-01-30 03:13:14.775690 | orchestrator | + set -e 2026-01-30 03:13:14.775765 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-30 03:13:14.775780 | orchestrator | ++ export INTERACTIVE=false 2026-01-30 03:13:14.775791 | orchestrator | ++ INTERACTIVE=false 2026-01-30 03:13:14.775802 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-30 03:13:14.775811 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-30 03:13:14.775821 | orchestrator | + source /opt/manager-vars.sh 2026-01-30 03:13:14.775836 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-30 03:13:14.775851 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-30 03:13:14.775866 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-30 03:13:14.775881 | orchestrator | ++ CEPH_VERSION=reef 2026-01-30 03:13:14.775893 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-30 03:13:14.775902 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-30 03:13:14.775911 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-30 03:13:14.775920 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-30 03:13:14.775929 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-30 03:13:14.775938 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-30 03:13:14.775947 | orchestrator | ++ export ARA=false 2026-01-30 03:13:14.775956 | orchestrator | ++ ARA=false 2026-01-30 03:13:14.775966 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-30 03:13:14.775999 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-30 03:13:14.776008 | orchestrator | ++ export TEMPEST=false 2026-01-30 03:13:14.776017 | orchestrator | ++ TEMPEST=false 2026-01-30 03:13:14.776026 | orchestrator | ++ export IS_ZUUL=true 2026-01-30 03:13:14.776034 | orchestrator | ++ IS_ZUUL=true 2026-01-30 03:13:14.776058 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 03:13:14.776072 | orchestrator | ++ 
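Every script in this job begins by sourcing /opt/configuration/scripts/include.sh and /opt/manager-vars.sh, which is why the same block of `++ export ...` lines repeats in each trace. A minimal sketch of that vars-file pattern, using values visible in the log; the `${VAR:-default}` guards are an illustrative addition, not in the original file:

```shell
#!/bin/sh
# Sketch of a manager-vars style file: every variable is exported so that
# child processes (osism, ansible-playbook) inherit it. Values mirror the
# log; the defaulting syntax is an assumption for re-sourcing safety.
export NUMBER_OF_NODES="${NUMBER_OF_NODES:-6}"
export CEPH_VERSION="${CEPH_VERSION:-reef}"
export MANAGER_VERSION="${MANAGER_VERSION:-9.5.0}"
export OPENSTACK_VERSION="${OPENSTACK_VERSION:-2024.2}"
export CEPH_STACK="${CEPH_STACK:-ceph-ansible}"

echo "deploying OpenStack ${OPENSTACK_VERSION} with manager ${MANAGER_VERSION}"
```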
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 03:13:14.776082 | orchestrator | ++ export EXTERNAL_API=false 2026-01-30 03:13:14.776090 | orchestrator | ++ EXTERNAL_API=false 2026-01-30 03:13:14.776099 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-30 03:13:14.776108 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-30 03:13:14.776117 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-30 03:13:14.776135 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-30 03:13:14.776144 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-30 03:13:14.776153 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-30 03:13:14.776228 | orchestrator | ++ semver 9.5.0 8.0.3 2026-01-30 03:13:14.821269 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-30 03:13:14.821359 | orchestrator | + osism apply frr 2026-01-30 03:13:27.181083 | orchestrator | 2026-01-30 03:13:27 | INFO  | Task 0144ec3e-305d-4e65-a5bd-1c54c4f79a6e (frr) was prepared for execution. 2026-01-30 03:13:27.181232 | orchestrator | 2026-01-30 03:13:27 | INFO  | It takes a moment until task 0144ec3e-305d-4e65-a5bd-1c54c4f79a6e (frr) has been started and output is visible here. 
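include.sh exports OSISM_APPLY_RETRY=1, and one earlier invocation passes `-r 2` to `osism apply` directly, so failed plays can be re-attempted. A hedged sketch of what such a retry wrapper could look like; the function name and logic are illustrative, not the wrapper OSISM actually uses:

```shell
#!/bin/sh
# Retry a command up to $OSISM_APPLY_RETRY times, the way a wrapper around
# `osism apply` might honour the variable exported by include.sh.
OSISM_APPLY_RETRY="${OSISM_APPLY_RETRY:-1}"

apply_with_retry() {
    attempt=1
    while [ "$attempt" -le "$OSISM_APPLY_RETRY" ]; do
        if "$@"; then
            return 0
        fi
        echo "attempt ${attempt} failed, retrying" >&2
        attempt=$((attempt + 1))
    done
    return 1
}

apply_with_retry true && echo "applied"
```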
2026-01-30 03:13:53.245669 | orchestrator |
2026-01-30 03:13:53.245796 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-01-30 03:13:53.245817 | orchestrator |
2026-01-30 03:13:53.245830 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-01-30 03:13:53.245851 | orchestrator | Friday 30 January 2026 03:13:32 +0000 (0:00:00.321) 0:00:00.321 ********
2026-01-30 03:13:53.245870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-01-30 03:13:53.245891 | orchestrator |
2026-01-30 03:13:53.245909 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-01-30 03:13:53.245928 | orchestrator | Friday 30 January 2026 03:13:32 +0000 (0:00:00.190) 0:00:00.512 ********
2026-01-30 03:13:53.245944 | orchestrator | changed: [testbed-manager]
2026-01-30 03:13:53.245962 | orchestrator |
2026-01-30 03:13:53.245981 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-01-30 03:13:53.246004 | orchestrator | Friday 30 January 2026 03:13:33 +0000 (0:00:01.403) 0:00:01.915 ********
2026-01-30 03:13:53.246086 | orchestrator | changed: [testbed-manager]
2026-01-30 03:13:53.246100 | orchestrator |
2026-01-30 03:13:53.246114 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-01-30 03:13:53.246127 | orchestrator | Friday 30 January 2026 03:13:44 +0000 (0:00:10.813) 0:00:12.728 ********
2026-01-30 03:13:53.246140 | orchestrator | ok: [testbed-manager]
2026-01-30 03:13:53.246153 | orchestrator |
2026-01-30 03:13:53.246165 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-01-30 03:13:53.246178 | orchestrator | Friday 30 January 2026 03:13:45 +0000 (0:00:00.876) 0:00:13.604 ********
2026-01-30 03:13:53.246191 | orchestrator | changed: [testbed-manager]
2026-01-30 03:13:53.246203 | orchestrator |
2026-01-30 03:13:53.246215 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-01-30 03:13:53.246228 | orchestrator | Friday 30 January 2026 03:13:46 +0000 (0:00:00.718) 0:00:14.323 ********
2026-01-30 03:13:53.246240 | orchestrator | ok: [testbed-manager]
2026-01-30 03:13:53.246253 | orchestrator |
2026-01-30 03:13:53.246265 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-01-30 03:13:53.246279 | orchestrator | Friday 30 January 2026 03:13:47 +0000 (0:00:01.044) 0:00:15.368 ********
2026-01-30 03:13:53.246291 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:13:53.246304 | orchestrator |
2026-01-30 03:13:53.246316 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-01-30 03:13:53.246329 | orchestrator | Friday 30 January 2026 03:13:47 +0000 (0:00:00.116) 0:00:15.484 ********
2026-01-30 03:13:53.246379 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:13:53.246400 | orchestrator |
2026-01-30 03:13:53.246418 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-01-30 03:13:53.246437 | orchestrator | Friday 30 January 2026 03:13:47 +0000 (0:00:00.125) 0:00:15.610 ********
2026-01-30 03:13:53.246456 | orchestrator | changed: [testbed-manager]
2026-01-30 03:13:53.246476 | orchestrator |
2026-01-30 03:13:53.246495 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-01-30 03:13:53.246512 | orchestrator | Friday 30 January 2026 03:13:48 +0000 (0:00:00.814) 0:00:16.424 ********
2026-01-30 03:13:53.246523 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-01-30 03:13:53.246534 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-01-30 03:13:53.246547 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-01-30 03:13:53.246602 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-01-30 03:13:53.246620 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-01-30 03:13:53.246638 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-01-30 03:13:53.246656 | orchestrator |
2026-01-30 03:13:53.246668 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-01-30 03:13:53.246679 | orchestrator | Friday 30 January 2026 03:13:50 +0000 (0:00:02.019) 0:00:18.444 ********
2026-01-30 03:13:53.246689 | orchestrator | ok: [testbed-manager]
2026-01-30 03:13:53.246700 | orchestrator |
2026-01-30 03:13:53.246711 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-01-30 03:13:53.246722 | orchestrator | Friday 30 January 2026 03:13:51 +0000 (0:00:01.431) 0:00:19.875 ********
2026-01-30 03:13:53.246732 | orchestrator | changed: [testbed-manager]
2026-01-30 03:13:53.246743 | orchestrator |
2026-01-30 03:13:53.246754 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:13:53.246765 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:13:53.246776 | orchestrator |
2026-01-30 03:13:53.246787 | orchestrator |
2026-01-30 03:13:53.246806 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:13:53.246817 | orchestrator | Friday 30 January 2026 03:13:53 +0000 (0:00:01.139) 0:00:21.015 ********
2026-01-30 03:13:53.246828 | orchestrator | ===============================================================================
2026-01-30 03:13:53.246839 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.81s
2026-01-30 03:13:53.246849 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.02s
2026-01-30 03:13:53.246860 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.43s
2026-01-30 03:13:53.246871 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.40s
2026-01-30 03:13:53.246882 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.14s
2026-01-30 03:13:53.246924 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.04s
2026-01-30 03:13:53.246944 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.88s
2026-01-30 03:13:53.246962 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.81s
2026-01-30 03:13:53.246980 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.72s
2026-01-30 03:13:53.247000 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.19s
2026-01-30 03:13:53.247018 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s
2026-01-30 03:13:53.247036 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.12s
2026-01-30 03:13:53.426292 | orchestrator | + osism apply kubernetes
2026-01-30 03:13:55.226475 | orchestrator | 2026-01-30 03:13:55 | INFO  | Task 9218d9bd-a9e2-4fdd-83b8-fd669fe566a7 (kubernetes) was prepared for execution.
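The frr role's "Set sysctl parameters" task loops over six kernel parameters (IPv4 forwarding on, ICMP redirects off, multipath hashing, linkdown route handling, loose rp_filter). The same key=value pairs can be sketched as a plain shell loop; printing the `sysctl -w` commands instead of executing them keeps the sketch runnable without root, and `print_sysctls` is a hypothetical helper name:

```shell
#!/bin/sh
set -e

# Reproduce the frr role's sysctl task as a loop over key=value pairs
# taken from the log. Echoing rather than running `sysctl -w` avoids
# needing root; swap echo for the real command on a target host.
print_sysctls() {
    for kv in \
        net.ipv4.ip_forward=1 \
        net.ipv4.conf.all.send_redirects=0 \
        net.ipv4.conf.all.accept_redirects=0 \
        net.ipv4.fib_multipath_hash_policy=1 \
        net.ipv4.conf.default.ignore_routes_with_linkdown=1 \
        net.ipv4.conf.all.rp_filter=2; do
        echo "sysctl -w ${kv}"
    done
}

print_sysctls
```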
2026-01-30 03:13:55.226555 | orchestrator | 2026-01-30 03:13:55 | INFO  | It takes a moment until task 9218d9bd-a9e2-4fdd-83b8-fd669fe566a7 (kubernetes) has been started and output is visible here. 2026-01-30 03:14:18.252432 | orchestrator | 2026-01-30 03:14:18.252521 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-30 03:14:18.252532 | orchestrator | 2026-01-30 03:14:18.252538 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-30 03:14:18.252545 | orchestrator | Friday 30 January 2026 03:13:59 +0000 (0:00:00.136) 0:00:00.136 ******** 2026-01-30 03:14:18.252550 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:14:18.252556 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:14:18.252562 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:14:18.252567 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:14:18.252573 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:14:18.252625 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:14:18.252631 | orchestrator | 2026-01-30 03:14:18.252637 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-30 03:14:18.252642 | orchestrator | Friday 30 January 2026 03:13:59 +0000 (0:00:00.561) 0:00:00.697 ******** 2026-01-30 03:14:18.252648 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.252654 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.252660 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.252667 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.252677 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.252685 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:14:18.252692 | orchestrator | 2026-01-30 03:14:18.252700 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-30 03:14:18.252710 | orchestrator | Friday 30 January 2026 
03:14:00 +0000 (0:00:00.462) 0:00:01.160 ******** 2026-01-30 03:14:18.252718 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.252726 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.252734 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.252742 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.252749 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.252758 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:14:18.252767 | orchestrator | 2026-01-30 03:14:18.252776 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-30 03:14:18.252785 | orchestrator | Friday 30 January 2026 03:14:00 +0000 (0:00:00.552) 0:00:01.713 ******** 2026-01-30 03:14:18.252793 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:14:18.252798 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:14:18.252804 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:14:18.252813 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:14:18.252818 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:14:18.252823 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:14:18.252829 | orchestrator | 2026-01-30 03:14:18.252834 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-30 03:14:18.252840 | orchestrator | Friday 30 January 2026 03:14:02 +0000 (0:00:02.094) 0:00:03.807 ******** 2026-01-30 03:14:18.252845 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:14:18.252851 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:14:18.252856 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:14:18.252861 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:14:18.252866 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:14:18.252871 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:14:18.252876 | orchestrator | 2026-01-30 03:14:18.252882 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-01-30 03:14:18.252887 | orchestrator | Friday 30 January 2026 03:14:03 +0000 (0:00:00.968) 0:00:04.776 ******** 2026-01-30 03:14:18.252892 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:14:18.252914 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:14:18.252919 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:14:18.252924 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:14:18.252929 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:14:18.252934 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:14:18.252939 | orchestrator | 2026-01-30 03:14:18.252951 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-30 03:14:18.252957 | orchestrator | Friday 30 January 2026 03:14:05 +0000 (0:00:01.830) 0:00:06.607 ******** 2026-01-30 03:14:18.252963 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.252969 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.252975 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.252980 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.252986 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.252991 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:14:18.252997 | orchestrator | 2026-01-30 03:14:18.253003 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-30 03:14:18.253009 | orchestrator | Friday 30 January 2026 03:14:06 +0000 (0:00:00.523) 0:00:07.131 ******** 2026-01-30 03:14:18.253014 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.253020 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.253026 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.253031 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.253037 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.253043 | orchestrator | 
skipping: [testbed-node-2] 2026-01-30 03:14:18.253049 | orchestrator | 2026-01-30 03:14:18.253054 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-30 03:14:18.253060 | orchestrator | Friday 30 January 2026 03:14:06 +0000 (0:00:00.742) 0:00:07.873 ******** 2026-01-30 03:14:18.253067 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 03:14:18.253073 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 03:14:18.253078 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.253084 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 03:14:18.253089 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 03:14:18.253094 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.253099 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 03:14:18.253104 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 03:14:18.253109 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.253114 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 03:14:18.253133 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 03:14:18.253138 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.253143 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 03:14:18.253148 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 03:14:18.253153 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.253159 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 03:14:18.253174 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 03:14:18.253179 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:14:18.253184 | orchestrator | 2026-01-30 03:14:18.253190 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-30 03:14:18.253195 | orchestrator | Friday 30 January 2026 03:14:07 +0000 (0:00:00.822) 0:00:08.696 ******** 2026-01-30 03:14:18.253200 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.253211 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.253217 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.253227 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.253232 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.253237 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:14:18.253242 | orchestrator | 2026-01-30 03:14:18.253247 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-30 03:14:18.253253 | orchestrator | Friday 30 January 2026 03:14:08 +0000 (0:00:01.147) 0:00:09.843 ******** 2026-01-30 03:14:18.253258 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:14:18.253263 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:14:18.253268 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:14:18.253274 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:14:18.253279 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:14:18.253284 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:14:18.253289 | orchestrator | 2026-01-30 03:14:18.253294 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-30 03:14:18.253299 | orchestrator | Friday 30 January 2026 03:14:09 +0000 (0:00:00.674) 0:00:10.517 ******** 2026-01-30 03:14:18.253304 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:14:18.253309 | orchestrator | changed: 
[testbed-node-2] 2026-01-30 03:14:18.253314 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:14:18.253319 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:14:18.253324 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:14:18.253329 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:14:18.253334 | orchestrator | 2026-01-30 03:14:18.253339 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-30 03:14:18.253349 | orchestrator | Friday 30 January 2026 03:14:15 +0000 (0:00:06.085) 0:00:16.603 ******** 2026-01-30 03:14:18.253358 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.253371 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.253381 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.253390 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.253399 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.253407 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:14:18.253416 | orchestrator | 2026-01-30 03:14:18.253423 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-30 03:14:18.253428 | orchestrator | Friday 30 January 2026 03:14:16 +0000 (0:00:00.632) 0:00:17.235 ******** 2026-01-30 03:14:18.253433 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:14:18.253438 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:14:18.253443 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:14:18.253448 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:14:18.253453 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:14:18.253458 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:14:18.253463 | orchestrator | 2026-01-30 03:14:18.253468 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-30 03:14:18.253475 | orchestrator | Friday 30 
January 2026 03:14:17 +0000 (0:00:01.044) 0:00:18.280 ********
2026-01-30 03:14:18.253480 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:14:18.253485 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:14:18.253490 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:14:18.253495 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:14:18.253500 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:14:18.253505 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:14:18.253510 | orchestrator |
2026-01-30 03:14:18.253515 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-01-30 03:14:18.253520 | orchestrator | Friday 30 January 2026 03:14:17 +0000 (0:00:00.427) 0:00:18.708 ********
2026-01-30 03:14:18.253525 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-01-30 03:14:18.253535 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-01-30 03:14:18.253540 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:14:18.253545 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-01-30 03:14:18.253555 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-01-30 03:14:18.253560 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:14:18.253564 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-01-30 03:14:18.253570 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-01-30 03:14:18.253590 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:14:18.253596 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-01-30 03:14:18.253601 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-01-30 03:14:18.253606 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:14:18.253611 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-01-30 03:14:18.253616 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-01-30 03:14:18.253621 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:14:18.253626 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-01-30 03:14:18.253631 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-01-30 03:14:18.253636 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:14:18.253641 | orchestrator |
2026-01-30 03:14:18.253646 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-30 03:14:18.253656 | orchestrator | Friday 30 January 2026 03:14:18 +0000 (0:00:00.574) 0:00:19.282 ********
2026-01-30 03:15:29.662949 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:15:29.663077 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:15:29.663101 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:15:29.663114 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:15:29.663132 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.663142 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.663153 | orchestrator |
2026-01-30 03:15:29.663164 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-30 03:15:29.663175 | orchestrator | Friday 30 January 2026 03:14:18 +0000 (0:00:00.398) 0:00:19.681 ********
2026-01-30 03:15:29.663185 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:15:29.663196 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:15:29.663211 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:15:29.663225 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:15:29.663235 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.663245 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.663254 | orchestrator |
2026-01-30 03:15:29.663267 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-30 03:15:29.663285 | orchestrator |
2026-01-30 03:15:29.663301 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-30 03:15:29.663313 | orchestrator | Friday 30 January 2026 03:14:19 +0000 (0:00:00.795) 0:00:20.476 ********
2026-01-30 03:15:29.663323 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.663334 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.663343 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.663357 | orchestrator |
2026-01-30 03:15:29.663372 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-30 03:15:29.663382 | orchestrator | Friday 30 January 2026 03:14:20 +0000 (0:00:00.684) 0:00:21.161 ********
2026-01-30 03:15:29.663392 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.663401 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.663411 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.663423 | orchestrator |
2026-01-30 03:15:29.663439 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-30 03:15:29.663450 | orchestrator | Friday 30 January 2026 03:14:21 +0000 (0:00:01.188) 0:00:22.349 ********
2026-01-30 03:15:29.663460 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.663502 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.663521 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.663538 | orchestrator |
2026-01-30 03:15:29.663555 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-30 03:15:29.663630 | orchestrator | Friday 30 January 2026 03:14:22 +0000 (0:00:00.576) 0:00:23.122 ********
2026-01-30 03:15:29.663648 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.663659 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.663675 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.663691 | orchestrator |
2026-01-30 03:15:29.663706 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-30 03:15:29.663723 | orchestrator | Friday 30 January 2026 03:14:22 +0000 (0:00:00.576) 0:00:23.698 ********
2026-01-30 03:15:29.663742 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:15:29.663754 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.663765 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.663777 | orchestrator |
2026-01-30 03:15:29.663794 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-30 03:15:29.663828 | orchestrator | Friday 30 January 2026 03:14:22 +0000 (0:00:00.279) 0:00:23.978 ********
2026-01-30 03:15:29.663841 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:15:29.663852 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:15:29.663863 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:15:29.663873 | orchestrator |
2026-01-30 03:15:29.663885 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-30 03:15:29.663903 | orchestrator | Friday 30 January 2026 03:14:23 +0000 (0:00:00.807) 0:00:24.786 ********
2026-01-30 03:15:29.663915 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:15:29.663925 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:15:29.663935 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:15:29.663946 | orchestrator |
2026-01-30 03:15:29.663963 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-30 03:15:29.663974 | orchestrator | Friday 30 January 2026 03:14:24 +0000 (0:00:01.246) 0:00:26.032 ********
2026-01-30 03:15:29.663984 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:15:29.663993 | orchestrator |
2026-01-30 03:15:29.664003 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-30 03:15:29.664012 | orchestrator | Friday 30 January 2026 03:14:25 +0000 (0:00:00.491) 0:00:26.523 ********
2026-01-30 03:15:29.664025 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.664041 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.664055 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.664065 | orchestrator |
2026-01-30 03:15:29.664074 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-30 03:15:29.664084 | orchestrator | Friday 30 January 2026 03:14:26 +0000 (0:00:01.328) 0:00:27.852 ********
2026-01-30 03:15:29.664093 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.664103 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.664112 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:15:29.664122 | orchestrator |
2026-01-30 03:15:29.664131 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-30 03:15:29.664141 | orchestrator | Friday 30 January 2026 03:14:27 +0000 (0:00:00.516) 0:00:28.368 ********
2026-01-30 03:15:29.664151 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.664160 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.664175 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:15:29.664192 | orchestrator |
2026-01-30 03:15:29.664208 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-30 03:15:29.664222 | orchestrator | Friday 30 January 2026 03:14:28 +0000 (0:00:00.745) 0:00:29.113 ********
2026-01-30 03:15:29.664238 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.664253 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.664269 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:15:29.664286 | orchestrator |
2026-01-30 03:15:29.664303 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-30 03:15:29.664345 | orchestrator | Friday 30 January 2026 03:14:29 +0000 (0:00:01.259) 0:00:30.373 ********
2026-01-30 03:15:29.664364 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:15:29.664398 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.664415 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.664431 | orchestrator |
2026-01-30 03:15:29.664449 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-30 03:15:29.664465 | orchestrator | Friday 30 January 2026 03:14:29 +0000 (0:00:00.322) 0:00:30.695 ********
2026-01-30 03:15:29.664539 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:15:29.664557 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.664572 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.664589 | orchestrator |
2026-01-30 03:15:29.664606 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-30 03:15:29.664623 | orchestrator | Friday 30 January 2026 03:14:30 +0000 (0:00:00.490) 0:00:31.186 ********
2026-01-30 03:15:29.664635 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:15:29.664644 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:15:29.664654 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:15:29.664663 | orchestrator |
2026-01-30 03:15:29.664682 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-30 03:15:29.664692 | orchestrator | Friday 30 January 2026 03:14:31 +0000 (0:00:01.120) 0:00:32.306 ********
2026-01-30 03:15:29.664702 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.664712 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.664722 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.664731 | orchestrator |
2026-01-30 03:15:29.664741 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-30 03:15:29.664751 | orchestrator | Friday 30 January 2026 03:14:34 +0000 (0:00:03.060) 0:00:35.367 ********
2026-01-30 03:15:29.664780 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.664795 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.664812 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.664829 | orchestrator |
2026-01-30 03:15:29.664839 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-30 03:15:29.664849 | orchestrator | Friday 30 January 2026 03:14:34 +0000 (0:00:00.394) 0:00:35.762 ********
2026-01-30 03:15:29.664859 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-30 03:15:29.664871 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-30 03:15:29.664880 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-30 03:15:29.664890 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-30 03:15:29.664906 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-30 03:15:29.664920 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-30 03:15:29.664929 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-30 03:15:29.664939 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-30 03:15:29.664949 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-30 03:15:29.664958 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-30 03:15:29.664968 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-30 03:15:29.664987 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-30 03:15:29.664997 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-30 03:15:29.665007 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-30 03:15:29.665016 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-01-30 03:15:29.665031 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:15:29.665047 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:15:29.665064 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:15:29.665074 | orchestrator |
2026-01-30 03:15:29.665096 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-30 03:15:29.665117 | orchestrator | Friday 30 January 2026 03:15:28 +0000 (0:00:53.717) 0:01:29.480 ********
2026-01-30 03:15:29.665141 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:15:29.665156 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:15:29.665172 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:15:29.665186 | orchestrator |
2026-01-30 03:15:29.665201 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-30 03:15:29.665216 | orchestrator | Friday 30 January 2026 03:15:28 +0000 (0:00:00.288) 0:01:29.769 ********
2026-01-30 03:15:29.665246 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.571643 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.571784 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.571803 | orchestrator |
2026-01-30 03:16:09.571817 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-30 03:16:09.571830 | orchestrator | Friday 30 January 2026 03:15:29 +0000 (0:00:00.938) 0:01:30.707 ********
2026-01-30 03:16:09.571875 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.571889 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.571902 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.571914 | orchestrator |
2026-01-30 03:16:09.571926 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-30 03:16:09.571938 | orchestrator | Friday 30 January 2026 03:15:30 +0000 (0:00:01.069) 0:01:31.777 ********
2026-01-30 03:16:09.571950 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.571962 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.571973 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.571985 | orchestrator |
2026-01-30 03:16:09.571997 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-30 03:16:09.572009 | orchestrator | Friday 30 January 2026 03:15:56 +0000 (0:00:25.660) 0:01:57.437 ********
2026-01-30 03:16:09.572020 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:16:09.572033 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:16:09.572044 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:16:09.572056 | orchestrator |
2026-01-30 03:16:09.572067 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-30 03:16:09.572079 | orchestrator | Friday 30 January 2026 03:15:56 +0000 (0:00:00.576) 0:01:58.013 ********
2026-01-30 03:16:09.572091 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:16:09.572103 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:16:09.572114 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:16:09.572126 | orchestrator |
2026-01-30 03:16:09.572137 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-30 03:16:09.572149 | orchestrator | Friday 30 January 2026 03:15:57 +0000 (0:00:00.584) 0:01:58.598 ********
2026-01-30 03:16:09.572161 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.572175 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.572189 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.572201 | orchestrator |
2026-01-30 03:16:09.572316 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-30 03:16:09.572355 | orchestrator | Friday 30 January 2026 03:15:58 +0000 (0:00:00.578) 0:01:59.177 ********
2026-01-30 03:16:09.572367 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:16:09.572378 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:16:09.572389 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:16:09.572400 | orchestrator |
2026-01-30 03:16:09.572411 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-30 03:16:09.572451 | orchestrator | Friday 30 January 2026 03:15:58 +0000 (0:00:00.698) 0:01:59.875 ********
2026-01-30 03:16:09.572463 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:16:09.572474 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:16:09.572485 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:16:09.572496 | orchestrator |
2026-01-30 03:16:09.572507 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-30 03:16:09.572519 | orchestrator | Friday 30 January 2026 03:15:59 +0000 (0:00:00.257) 0:02:00.133 ********
2026-01-30 03:16:09.572530 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.572541 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.572552 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.572613 | orchestrator |
2026-01-30 03:16:09.572626 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-30 03:16:09.572638 | orchestrator | Friday 30 January 2026 03:15:59 +0000 (0:00:00.575) 0:02:00.708 ********
2026-01-30 03:16:09.572649 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.572660 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.572671 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.572683 | orchestrator |
2026-01-30 03:16:09.572694 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-30 03:16:09.572705 | orchestrator | Friday 30 January 2026 03:16:00 +0000 (0:00:00.572) 0:02:01.280 ********
2026-01-30 03:16:09.572716 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.572727 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.572738 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.572750 | orchestrator |
2026-01-30 03:16:09.572761 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-30 03:16:09.572772 | orchestrator | Friday 30 January 2026 03:16:01 +0000 (0:00:00.781) 0:02:02.062 ********
2026-01-30 03:16:09.572786 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:16:09.572797 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:16:09.572808 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:16:09.572820 | orchestrator |
2026-01-30 03:16:09.572831 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-30 03:16:09.572842 | orchestrator | Friday 30 January 2026 03:16:01 +0000 (0:00:00.922) 0:02:02.984 ********
2026-01-30 03:16:09.572853 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:16:09.572865 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:16:09.572876 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:16:09.572887 | orchestrator |
2026-01-30 03:16:09.572898 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-30 03:16:09.572909 | orchestrator | Friday 30 January 2026 03:16:02 +0000 (0:00:00.258) 0:02:03.243 ********
2026-01-30 03:16:09.572920 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:16:09.572931 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:16:09.572942 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:16:09.572953 | orchestrator |
2026-01-30 03:16:09.572964 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-30 03:16:09.572975 | orchestrator | Friday 30 January 2026 03:16:02 +0000 (0:00:00.229) 0:02:03.473 ********
2026-01-30 03:16:09.572986 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:16:09.573026 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:16:09.573037 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:16:09.573048 | orchestrator |
2026-01-30 03:16:09.573060 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-30 03:16:09.573071 | orchestrator | Friday 30 January 2026 03:16:03 +0000 (0:00:00.602) 0:02:04.075 ********
2026-01-30 03:16:09.573090 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:16:09.573102 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:16:09.573133 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:16:09.573145 | orchestrator |
2026-01-30 03:16:09.573157 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-30 03:16:09.573170 | orchestrator | Friday 30 January 2026 03:16:03 +0000 (0:00:00.790) 0:02:04.865 ********
2026-01-30 03:16:09.573182 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-30 03:16:09.573193 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-30 03:16:09.573205 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-30 03:16:09.573215 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-30 03:16:09.573226 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-30 03:16:09.573237 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-30 03:16:09.573249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-30 03:16:09.573282 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-30 03:16:09.573294 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-30 03:16:09.573305 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-30 03:16:09.573316 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-30 03:16:09.573327 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-30 03:16:09.573339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-30 03:16:09.573350 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-30 03:16:09.573361 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-30 03:16:09.573372 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-30 03:16:09.573383 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-30 03:16:09.573395 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-30 03:16:09.573406 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-30 03:16:09.573417 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-30 03:16:09.573428 | orchestrator |
2026-01-30 03:16:09.573439 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-30 03:16:09.573450 | orchestrator |
2026-01-30 03:16:09.573461 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-30 03:16:09.573473 | orchestrator | Friday 30 January 2026 03:16:06 +0000 (0:00:03.067) 0:02:07.933 ********
2026-01-30 03:16:09.573484 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:16:09.573495 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:16:09.573506 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:16:09.573517 | orchestrator |
2026-01-30 03:16:09.573542 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-30 03:16:09.573554 | orchestrator | Friday 30 January 2026 03:16:07 +0000 (0:00:00.304) 0:02:08.237 ********
2026-01-30 03:16:09.573565 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:16:09.573576 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:16:09.573587 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:16:09.573606 | orchestrator |
2026-01-30 03:16:09.573617 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-30 03:16:09.573628 | orchestrator | Friday 30 January 2026 03:16:07 +0000 (0:00:00.748) 0:02:08.985 ********
2026-01-30 03:16:09.573639 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:16:09.573650 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:16:09.573661 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:16:09.573672 | orchestrator |
2026-01-30 03:16:09.573683 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-30 03:16:09.573695 | orchestrator | Friday 30 January 2026 03:16:08 +0000 (0:00:00.314) 0:02:09.299 ********
2026-01-30 03:16:09.573706 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:16:09.573717 | orchestrator |
2026-01-30 03:16:09.573729 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-30 03:16:09.573740 | orchestrator | Friday 30 January 2026 03:16:08 +0000 (0:00:00.434) 0:02:09.734 ********
2026-01-30 03:16:09.573751 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:16:09.573762 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:16:09.573773 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:16:09.573784 | orchestrator |
2026-01-30 03:16:09.573795 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-30 03:16:09.573806 | orchestrator | Friday 30 January 2026 03:16:09 +0000 (0:00:00.424) 0:02:10.159 ********
2026-01-30 03:16:09.573817 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:16:09.573829 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:16:09.573840 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:16:09.573851 | orchestrator |
2026-01-30 03:16:09.573862 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-30 03:16:09.573873 | orchestrator | Friday 30 January 2026 03:16:09 +0000 (0:00:00.295) 0:02:10.454 ********
2026-01-30 03:16:09.573891 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:17:40.287289 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:17:40.287433 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:17:40.287447 | orchestrator |
2026-01-30 03:17:40.287457 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-30 03:17:40.287467 | orchestrator | Friday 30 January 2026 03:16:09 +0000 (0:00:00.282) 0:02:10.736 ********
2026-01-30 03:17:40.287475 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:17:40.287484 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:17:40.287525 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:17:40.287535 | orchestrator |
2026-01-30 03:17:40.287543 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-30 03:17:40.287552 | orchestrator | Friday 30 January 2026 03:16:10 +0000 (0:00:00.597) 0:02:11.334 ********
2026-01-30 03:17:40.287560 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:17:40.287568 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:17:40.287577 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:17:40.287587 | orchestrator |
2026-01-30 03:17:40.287634 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-30 03:17:40.287650 | orchestrator | Friday 30 January 2026 03:16:11 +0000 (0:00:01.277) 0:02:12.611 ********
2026-01-30 03:17:40.287664 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:17:40.287677 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:17:40.287689 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:17:40.287702 | orchestrator |
2026-01-30 03:17:40.287717 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-30 03:17:40.287738 | orchestrator | Friday 30 January 2026 03:16:12 +0000 (0:00:01.207) 0:02:13.819 ********
2026-01-30 03:17:40.287753 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:17:40.287767 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:17:40.287781 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:17:40.287793 | orchestrator |
2026-01-30 03:17:40.287806 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-30 03:17:40.287876 | orchestrator |
2026-01-30 03:17:40.287892 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-30 03:17:40.287906 | orchestrator | Friday 30 January 2026 03:16:22 +0000 (0:00:09.974) 0:02:23.793 ********
2026-01-30 03:17:40.287921 | orchestrator | ok: [testbed-manager]
2026-01-30 03:17:40.287935 | orchestrator |
2026-01-30 03:17:40.287949 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-30 03:17:40.287964 | orchestrator | Friday 30 January 2026 03:16:23 +0000 (0:00:00.719) 0:02:24.513 ********
2026-01-30 03:17:40.287978 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.287992 | orchestrator |
2026-01-30 03:17:40.288004 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-30 03:17:40.288014 | orchestrator | Friday 30 January 2026 03:16:23 +0000 (0:00:00.491) 0:02:25.005 ********
2026-01-30 03:17:40.288024 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-30 03:17:40.288034 | orchestrator |
2026-01-30 03:17:40.288043 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-30 03:17:40.288057 | orchestrator | Friday 30 January 2026 03:16:24 +0000 (0:00:00.508) 0:02:25.513 ********
2026-01-30 03:17:40.288070 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.288084 | orchestrator |
2026-01-30 03:17:40.288096 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-30 03:17:40.288109 | orchestrator | Friday 30 January 2026 03:16:25 +0000 (0:00:00.792) 0:02:26.306 ********
2026-01-30 03:17:40.288121 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.288133 | orchestrator |
2026-01-30 03:17:40.288147 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-30 03:17:40.288161 | orchestrator | Friday 30 January 2026 03:16:25 +0000 (0:00:00.514) 0:02:26.820 ********
2026-01-30 03:17:40.288175 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-30 03:17:40.288186 | orchestrator |
2026-01-30 03:17:40.288194 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-30 03:17:40.288207 | orchestrator | Friday 30 January 2026 03:16:27 +0000 (0:00:01.379) 0:02:28.200 ********
2026-01-30 03:17:40.288221 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-30 03:17:40.288234 | orchestrator |
2026-01-30 03:17:40.288273 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-30 03:17:40.288288 | orchestrator | Friday 30 January 2026 03:16:27 +0000 (0:00:00.775) 0:02:28.975 ********
2026-01-30 03:17:40.288302 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.288316 | orchestrator |
2026-01-30 03:17:40.288329 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-30 03:17:40.288342 | orchestrator | Friday 30 January 2026 03:16:28 +0000 (0:00:00.445) 0:02:29.421 ********
2026-01-30 03:17:40.288356 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.288370 | orchestrator |
2026-01-30 03:17:40.288383 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-30 03:17:40.288394 | orchestrator |
2026-01-30 03:17:40.288402 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-30 03:17:40.288411 | orchestrator | Friday 30 January 2026 03:16:28 +0000 (0:00:00.434) 0:02:29.855 ********
2026-01-30 03:17:40.288419 | orchestrator | ok: [testbed-manager]
2026-01-30 03:17:40.288427 | orchestrator |
2026-01-30 03:17:40.288435 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-30 03:17:40.288443 | orchestrator | Friday 30 January 2026 03:16:28 +0000 (0:00:00.144) 0:02:29.999 ********
2026-01-30 03:17:40.288451 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-30 03:17:40.288461 | orchestrator |
2026-01-30 03:17:40.288469 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-30 03:17:40.288477 | orchestrator | Friday 30 January 2026 03:16:29 +0000 (0:00:00.393) 0:02:30.393 ********
2026-01-30 03:17:40.288485 | orchestrator | ok: [testbed-manager]
2026-01-30 03:17:40.288493 | orchestrator |
2026-01-30 03:17:40.288510 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-30 03:17:40.288518 | orchestrator | Friday 30 January 2026 03:16:30 +0000 (0:00:00.797) 0:02:31.190 ********
2026-01-30 03:17:40.288526 | orchestrator | ok: [testbed-manager]
2026-01-30 03:17:40.288534 | orchestrator |
2026-01-30 03:17:40.288562 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-30 03:17:40.288571 | orchestrator | Friday 30 January 2026 03:16:31 +0000 (0:00:01.224) 0:02:32.415 ********
2026-01-30 03:17:40.288579 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.288587 | orchestrator |
2026-01-30 03:17:40.288595 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-30 03:17:40.288603 | orchestrator | Friday 30 January 2026 03:16:32 +0000 (0:00:00.721) 0:02:33.137 ********
2026-01-30 03:17:40.288611 | orchestrator | ok: [testbed-manager]
2026-01-30 03:17:40.288619 | orchestrator |
2026-01-30 03:17:40.288627 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-30 03:17:40.288635 | orchestrator | Friday 30 January 2026 03:16:32 +0000 (0:00:00.438) 0:02:33.575 ********
2026-01-30 03:17:40.288643 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.288651 | orchestrator |
2026-01-30 03:17:40.288659 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-30 03:17:40.288667 | orchestrator | Friday 30 January 2026 03:16:39 +0000 (0:00:06.641) 0:02:40.217 ********
2026-01-30 03:17:40.288675 | orchestrator | changed: [testbed-manager]
2026-01-30 03:17:40.288683 | orchestrator |
2026-01-30 03:17:40.288691 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-30 03:17:40.288699 | orchestrator | Friday 30 January 2026 03:16:49 +0000 (0:00:10.163) 0:02:50.381 ********
2026-01-30 03:17:40.288707 | orchestrator | ok: [testbed-manager]
2026-01-30 03:17:40.288715 | orchestrator |
2026-01-30 03:17:40.288723 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-30 03:17:40.288730 | orchestrator |
2026-01-30 03:17:40.288739 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-30 03:17:40.288747 | orchestrator | Friday 30 January 2026 03:16:49 +0000 (0:00:00.578) 0:02:50.959 ********
2026-01-30 03:17:40.288755 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:17:40.288763 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:17:40.288771 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:17:40.288779 | orchestrator |
2026-01-30 03:17:40.288787 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-30 03:17:40.288795 | orchestrator | Friday 30 January 2026 03:16:50 +0000 (0:00:00.247) 0:02:51.207 ********
2026-01-30 03:17:40.288802 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:17:40.288867 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:17:40.288878 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:17:40.288886 | orchestrator |
2026-01-30 03:17:40.288894 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-30 03:17:40.288902 | orchestrator | Friday 30 January 2026 03:16:50 +0000 (0:00:00.269) 0:02:51.476 ********
2026-01-30 03:17:40.288910 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:17:40.288918 | orchestrator |
2026-01-30 03:17:40.288927 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-30 03:17:40.288935 | orchestrator | Friday 30 January 2026 03:16:50 +0000 (0:00:00.439) 0:02:51.916 ********
2026-01-30 03:17:40.288947 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-30 03:17:40.288961 |
orchestrator | 2026-01-30 03:17:40.288974 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-30 03:17:40.288988 | orchestrator | Friday 30 January 2026 03:16:51 +0000 (0:00:00.798) 0:02:52.714 ******** 2026-01-30 03:17:40.289002 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:17:40.289010 | orchestrator | 2026-01-30 03:17:40.289018 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-30 03:17:40.289033 | orchestrator | Friday 30 January 2026 03:16:52 +0000 (0:00:00.790) 0:02:53.505 ******** 2026-01-30 03:17:40.289040 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:17:40.289047 | orchestrator | 2026-01-30 03:17:40.289054 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-30 03:17:40.289060 | orchestrator | Friday 30 January 2026 03:16:52 +0000 (0:00:00.129) 0:02:53.635 ******** 2026-01-30 03:17:40.289067 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:17:40.289074 | orchestrator | 2026-01-30 03:17:40.289080 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-30 03:17:40.289087 | orchestrator | Friday 30 January 2026 03:16:53 +0000 (0:00:00.965) 0:02:54.600 ******** 2026-01-30 03:17:40.289094 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:17:40.289100 | orchestrator | 2026-01-30 03:17:40.289107 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-30 03:17:40.289114 | orchestrator | Friday 30 January 2026 03:16:53 +0000 (0:00:00.117) 0:02:54.718 ******** 2026-01-30 03:17:40.289120 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:17:40.289127 | orchestrator | 2026-01-30 03:17:40.289134 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-30 03:17:40.289140 | orchestrator | Friday 30 
January 2026 03:16:53 +0000 (0:00:00.112) 0:02:54.830 ******** 2026-01-30 03:17:40.289147 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:17:40.289154 | orchestrator | 2026-01-30 03:17:40.289160 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-30 03:17:40.289172 | orchestrator | Friday 30 January 2026 03:16:53 +0000 (0:00:00.112) 0:02:54.943 ******** 2026-01-30 03:17:40.289179 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:17:40.289186 | orchestrator | 2026-01-30 03:17:40.289193 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-30 03:17:40.289200 | orchestrator | Friday 30 January 2026 03:16:53 +0000 (0:00:00.106) 0:02:55.050 ******** 2026-01-30 03:17:40.289206 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-30 03:17:40.289213 | orchestrator | 2026-01-30 03:17:40.289220 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-30 03:17:40.289227 | orchestrator | Friday 30 January 2026 03:16:58 +0000 (0:00:04.630) 0:02:59.681 ******** 2026-01-30 03:17:40.289233 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-30 03:17:40.289240 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-01-30 03:17:40.289254 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-30 03:18:00.055999 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-30 03:18:00.056115 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-30 03:18:00.056148 | orchestrator | 2026-01-30 03:18:00.056172 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-30 03:18:00.056193 | orchestrator | Friday 30 January 2026 03:17:40 +0000 (0:00:41.647) 0:03:41.328 ******** 2026-01-30 03:18:00.056213 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:18:00.056233 | orchestrator | 2026-01-30 03:18:00.056254 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-30 03:18:00.056273 | orchestrator | Friday 30 January 2026 03:17:41 +0000 (0:00:01.047) 0:03:42.376 ******** 2026-01-30 03:18:00.056296 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-30 03:18:00.056316 | orchestrator | 2026-01-30 03:18:00.056335 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-30 03:18:00.056355 | orchestrator | Friday 30 January 2026 03:17:42 +0000 (0:00:01.341) 0:03:43.717 ******** 2026-01-30 03:18:00.056375 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-30 03:18:00.056397 | orchestrator | 2026-01-30 03:18:00.056419 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-30 03:18:00.056442 | orchestrator | Friday 30 January 2026 03:17:43 +0000 (0:00:01.059) 0:03:44.777 ******** 2026-01-30 03:18:00.056493 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:18:00.056518 | orchestrator | 2026-01-30 03:18:00.056538 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-30 03:18:00.056563 | orchestrator 
| Friday 30 January 2026 03:17:43 +0000 (0:00:00.111) 0:03:44.888 ******** 2026-01-30 03:18:00.056586 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-30 03:18:00.056609 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-30 03:18:00.056631 | orchestrator | 2026-01-30 03:18:00.056650 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-30 03:18:00.056670 | orchestrator | Friday 30 January 2026 03:17:45 +0000 (0:00:01.601) 0:03:46.490 ******** 2026-01-30 03:18:00.056689 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:18:00.056711 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:18:00.056820 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:18:00.056841 | orchestrator | 2026-01-30 03:18:00.056862 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-30 03:18:00.056883 | orchestrator | Friday 30 January 2026 03:17:45 +0000 (0:00:00.283) 0:03:46.774 ******** 2026-01-30 03:18:00.056906 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:18:00.056919 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:18:00.056929 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:18:00.056938 | orchestrator | 2026-01-30 03:18:00.056948 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-30 03:18:00.056958 | orchestrator | 2026-01-30 03:18:00.056968 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-30 03:18:00.056978 | orchestrator | Friday 30 January 2026 03:17:46 +0000 (0:00:00.765) 0:03:47.539 ******** 2026-01-30 03:18:00.056988 | orchestrator | ok: [testbed-manager] 2026-01-30 03:18:00.056997 | orchestrator | 2026-01-30 03:18:00.057008 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-30 03:18:00.057018 | orchestrator | Friday 30 January 2026 03:17:46 +0000 (0:00:00.237) 0:03:47.777 ******** 2026-01-30 03:18:00.057028 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-30 03:18:00.057045 | orchestrator | 2026-01-30 03:18:00.057062 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-30 03:18:00.057078 | orchestrator | Friday 30 January 2026 03:17:46 +0000 (0:00:00.198) 0:03:47.975 ******** 2026-01-30 03:18:00.057094 | orchestrator | changed: [testbed-manager] 2026-01-30 03:18:00.057111 | orchestrator | 2026-01-30 03:18:00.057128 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-30 03:18:00.057143 | orchestrator | 2026-01-30 03:18:00.057160 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-30 03:18:00.057174 | orchestrator | Friday 30 January 2026 03:17:51 +0000 (0:00:04.704) 0:03:52.679 ******** 2026-01-30 03:18:00.057190 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:18:00.057207 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:18:00.057222 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:18:00.057236 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:18:00.057251 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:18:00.057265 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:18:00.057279 | orchestrator | 2026-01-30 03:18:00.057294 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-30 03:18:00.057309 | orchestrator | Friday 30 January 2026 03:17:52 +0000 (0:00:00.503) 0:03:53.183 ******** 2026-01-30 03:18:00.057325 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-30 03:18:00.057341 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-01-30 03:18:00.057355 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-30 03:18:00.057371 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-30 03:18:00.057404 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-30 03:18:00.057421 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-30 03:18:00.057436 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-30 03:18:00.057451 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-30 03:18:00.057467 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-30 03:18:00.057510 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-30 03:18:00.057528 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-30 03:18:00.057545 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-30 03:18:00.057561 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-30 03:18:00.057577 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-30 03:18:00.057593 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-30 03:18:00.057616 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-30 03:18:00.057626 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-30 03:18:00.057636 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-01-30 03:18:00.057645 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-30 03:18:00.057655 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-30 03:18:00.057664 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-30 03:18:00.057674 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-30 03:18:00.057684 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-30 03:18:00.057694 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-30 03:18:00.057703 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-30 03:18:00.057713 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-30 03:18:00.057750 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-30 03:18:00.057760 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-30 03:18:00.057770 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-30 03:18:00.057780 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-30 03:18:00.057790 | orchestrator | 2026-01-30 03:18:00.057800 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-30 03:18:00.057810 | orchestrator | Friday 30 January 2026 03:17:59 +0000 (0:00:06.958) 0:04:00.142 ******** 2026-01-30 03:18:00.057820 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:18:00.057829 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:18:00.057839 | orchestrator | 
skipping: [testbed-node-5] 2026-01-30 03:18:00.057849 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:18:00.057859 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:18:00.057869 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:18:00.057878 | orchestrator | 2026-01-30 03:18:00.057888 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-30 03:18:00.057898 | orchestrator | Friday 30 January 2026 03:17:59 +0000 (0:00:00.431) 0:04:00.573 ******** 2026-01-30 03:18:00.057908 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:18:00.057926 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:18:00.057936 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:18:00.057946 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:18:00.057956 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:18:00.057965 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:18:00.057975 | orchestrator | 2026-01-30 03:18:00.057985 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:18:00.057996 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:18:00.058009 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-30 03:18:00.058082 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-30 03:18:00.058103 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-30 03:18:00.058116 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 03:18:00.058133 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 03:18:00.058146 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 03:18:00.058159 | orchestrator | 2026-01-30 03:18:00.058173 | orchestrator | 2026-01-30 03:18:00.058197 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:18:00.058216 | orchestrator | Friday 30 January 2026 03:18:00 +0000 (0:00:00.506) 0:04:01.080 ******** 2026-01-30 03:18:00.058246 | orchestrator | =============================================================================== 2026-01-30 03:18:00.249314 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.72s 2026-01-30 03:18:00.249417 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 41.65s 2026-01-30 03:18:00.249432 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.66s 2026-01-30 03:18:00.249444 | orchestrator | kubectl : Install required packages ------------------------------------ 10.16s 2026-01-30 03:18:00.249456 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.97s 2026-01-30 03:18:00.249467 | orchestrator | Manage labels ----------------------------------------------------------- 6.96s 2026-01-30 03:18:00.249478 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.64s 2026-01-30 03:18:00.249489 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.09s 2026-01-30 03:18:00.249500 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.70s 2026-01-30 03:18:00.249511 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.63s 2026-01-30 03:18:00.249522 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.07s 2026-01-30 03:18:00.249536 | orchestrator 
| k3s_server : Detect Kubernetes version for label compatibility ---------- 3.06s 2026-01-30 03:18:00.249547 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.09s 2026-01-30 03:18:00.249558 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.83s 2026-01-30 03:18:00.249569 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.60s 2026-01-30 03:18:00.249580 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.38s 2026-01-30 03:18:00.249591 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.34s 2026-01-30 03:18:00.249629 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.33s 2026-01-30 03:18:00.249641 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.28s 2026-01-30 03:18:00.249652 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.26s 2026-01-30 03:18:00.436041 | orchestrator | + osism apply copy-kubeconfig 2026-01-30 03:18:12.503027 | orchestrator | 2026-01-30 03:18:12 | INFO  | Task 42b0beab-04cd-4e3d-b9a7-17ae269b3cf5 (copy-kubeconfig) was prepared for execution. 2026-01-30 03:18:12.503137 | orchestrator | 2026-01-30 03:18:12 | INFO  | It takes a moment until task 42b0beab-04cd-4e3d-b9a7-17ae269b3cf5 (copy-kubeconfig) has been started and output is visible here. 
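The `copy-kubeconfig` task about to run (and the "Change server address in the kubeconfig" tasks earlier in the log) fetches the kubeconfig from the first control-plane node and rewrites its `server:` entry so clients talk to the cluster VIP instead of the node-local address. A minimal sketch of that rewrite; the file path and the VIP address below are illustrative assumptions, not values taken from this job:

```shell
# Hypothetical sketch of the kubeconfig server rewrite performed by the
# "Change server address in the kubeconfig" task. Path and VIP are assumed.
cat > /tmp/kubeconfig.sketch <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the kubeconfig at the cluster VIP instead of the local API address.
sed -i 's|server: https://127.0.0.1:6443|server: https://192.168.16.254:6443|' /tmp/kubeconfig.sketch
grep 'server:' /tmp/kubeconfig.sketch
```

The same substitution is applied twice in the log: once for the copy on the manager host and once for the copy made available inside the manager service.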
2026-01-30 03:18:18.727496 | orchestrator | 2026-01-30 03:18:18.727578 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-30 03:18:18.727588 | orchestrator | 2026-01-30 03:18:18.727594 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-30 03:18:18.727600 | orchestrator | Friday 30 January 2026 03:18:16 +0000 (0:00:00.139) 0:00:00.139 ******** 2026-01-30 03:18:18.727607 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-30 03:18:18.727613 | orchestrator | 2026-01-30 03:18:18.727619 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-30 03:18:18.727625 | orchestrator | Friday 30 January 2026 03:18:17 +0000 (0:00:00.672) 0:00:00.812 ******** 2026-01-30 03:18:18.727682 | orchestrator | changed: [testbed-manager] 2026-01-30 03:18:18.727691 | orchestrator | 2026-01-30 03:18:18.727697 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-30 03:18:18.727703 | orchestrator | Friday 30 January 2026 03:18:18 +0000 (0:00:01.028) 0:00:01.840 ******** 2026-01-30 03:18:18.727712 | orchestrator | changed: [testbed-manager] 2026-01-30 03:18:18.727718 | orchestrator | 2026-01-30 03:18:18.727726 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:18:18.727732 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:18:18.727740 | orchestrator | 2026-01-30 03:18:18.727745 | orchestrator | 2026-01-30 03:18:18.727751 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:18:18.727756 | orchestrator | Friday 30 January 2026 03:18:18 +0000 (0:00:00.366) 0:00:02.206 ******** 2026-01-30 03:18:18.727762 | orchestrator | 
=============================================================================== 2026-01-30 03:18:18.727768 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.03s 2026-01-30 03:18:18.727773 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.67s 2026-01-30 03:18:18.727779 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.37s 2026-01-30 03:18:18.905190 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-01-30 03:18:30.753175 | orchestrator | 2026-01-30 03:18:30 | INFO  | Task 159f54cf-23c8-4211-8876-6fbd1845f4cd (openstackclient) was prepared for execution. 2026-01-30 03:18:30.753286 | orchestrator | 2026-01-30 03:18:30 | INFO  | It takes a moment until task 159f54cf-23c8-4211-8876-6fbd1845f4cd (openstackclient) has been started and output is visible here. 2026-01-30 03:19:14.732438 | orchestrator | 2026-01-30 03:19:14.732548 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-01-30 03:19:14.732568 | orchestrator | 2026-01-30 03:19:14.732583 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-01-30 03:19:14.732597 | orchestrator | Friday 30 January 2026 03:18:34 +0000 (0:00:00.217) 0:00:00.217 ******** 2026-01-30 03:19:14.732611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-01-30 03:19:14.732624 | orchestrator | 2026-01-30 03:19:14.732655 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-01-30 03:19:14.732664 | orchestrator | Friday 30 January 2026 03:18:35 +0000 (0:00:00.208) 0:00:00.425 ******** 2026-01-30 03:19:14.732672 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-01-30 
03:19:14.732682 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-30 03:19:14.732690 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-30 03:19:14.732698 | orchestrator | 2026-01-30 03:19:14.732706 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-30 03:19:14.732714 | orchestrator | Friday 30 January 2026 03:18:36 +0000 (0:00:01.190) 0:00:01.616 ******** 2026-01-30 03:19:14.732722 | orchestrator | changed: [testbed-manager] 2026-01-30 03:19:14.732733 | orchestrator | 2026-01-30 03:19:14.732746 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-30 03:19:14.732759 | orchestrator | Friday 30 January 2026 03:18:37 +0000 (0:00:01.327) 0:00:02.943 ******** 2026-01-30 03:19:14.732772 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-01-30 03:19:14.732786 | orchestrator | ok: [testbed-manager] 2026-01-30 03:19:14.732800 | orchestrator | 2026-01-30 03:19:14.732814 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-30 03:19:14.732826 | orchestrator | Friday 30 January 2026 03:19:10 +0000 (0:00:32.786) 0:00:35.730 ******** 2026-01-30 03:19:14.732840 | orchestrator | changed: [testbed-manager] 2026-01-30 03:19:14.732850 | orchestrator | 2026-01-30 03:19:14.732858 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-30 03:19:14.732866 | orchestrator | Friday 30 January 2026 03:19:11 +0000 (0:00:00.823) 0:00:36.553 ******** 2026-01-30 03:19:14.732874 | orchestrator | ok: [testbed-manager] 2026-01-30 03:19:14.732882 | orchestrator | 2026-01-30 03:19:14.732890 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-30 03:19:14.732898 | orchestrator | Friday 30 January 2026 03:19:11 +0000 
(0:00:00.551) 0:00:37.105 ******** 2026-01-30 03:19:14.732906 | orchestrator | changed: [testbed-manager] 2026-01-30 03:19:14.732914 | orchestrator | 2026-01-30 03:19:14.732922 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-01-30 03:19:14.732930 | orchestrator | Friday 30 January 2026 03:19:13 +0000 (0:00:01.265) 0:00:38.370 ******** 2026-01-30 03:19:14.732938 | orchestrator | changed: [testbed-manager] 2026-01-30 03:19:14.732946 | orchestrator | 2026-01-30 03:19:14.733010 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-01-30 03:19:14.733025 | orchestrator | Friday 30 January 2026 03:19:13 +0000 (0:00:00.607) 0:00:38.977 ******** 2026-01-30 03:19:14.733040 | orchestrator | changed: [testbed-manager] 2026-01-30 03:19:14.733053 | orchestrator | 2026-01-30 03:19:14.733067 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-01-30 03:19:14.733082 | orchestrator | Friday 30 January 2026 03:19:14 +0000 (0:00:00.512) 0:00:39.490 ******** 2026-01-30 03:19:14.733096 | orchestrator | ok: [testbed-manager] 2026-01-30 03:19:14.733109 | orchestrator | 2026-01-30 03:19:14.733117 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:19:14.733125 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:19:14.733135 | orchestrator | 2026-01-30 03:19:14.733143 | orchestrator | 2026-01-30 03:19:14.733150 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:19:14.733159 | orchestrator | Friday 30 January 2026 03:19:14 +0000 (0:00:00.360) 0:00:39.851 ******** 2026-01-30 03:19:14.733167 | orchestrator | =============================================================================== 2026-01-30 03:19:14.733175 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 32.79s 2026-01-30 03:19:14.733183 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.33s 2026-01-30 03:19:14.733200 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.27s 2026-01-30 03:19:14.733208 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.19s 2026-01-30 03:19:14.733216 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.82s 2026-01-30 03:19:14.733224 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.61s 2026-01-30 03:19:14.733231 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.55s 2026-01-30 03:19:14.733239 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.51s 2026-01-30 03:19:14.733247 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.36s 2026-01-30 03:19:14.733255 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.21s 2026-01-30 03:19:16.791717 | orchestrator | 2026-01-30 03:19:16 | INFO  | Task d24e8cf3-2b7f-4c5d-9616-8047a04c804c (common) was prepared for execution. 2026-01-30 03:19:16.791830 | orchestrator | 2026-01-30 03:19:16 | INFO  | It takes a moment until task d24e8cf3-2b7f-4c5d-9616-8047a04c804c (common) has been started and output is visible here. 
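Both "Manage openstackclient service" (which logged `FAILED - RETRYING ... (10 retries left)` before succeeding) and the "Wait for an healthy service" handler follow the same poll-until-ready pattern. A self-contained sketch of that loop; `check_healthy` is a stand-in that simulates a service becoming ready on the third poll, not the real probe (which would be something like a container health inspection):

```shell
# Generic retry loop as used by the "Manage openstackclient service" task.
# check_healthy is a hypothetical stand-in: it succeeds on the third call.
attempts=0
check_healthy() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

retries=10
until check_healthy; do
  retries=$((retries - 1))
  if [ "$retries" -le 0 ]; then
    echo "service did not become healthy"
    exit 1
  fi
  # The real task sleeps between polls, e.g.: sleep 5
done
echo "healthy after $attempts checks"
```

In Ansible terms this corresponds to a task with `until`, `retries`, and `delay` loop control, which is why the log prints the remaining retry count.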
2026-01-30 03:19:27.105064 | orchestrator | 2026-01-30 03:19:27.105233 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-30 03:19:27.105250 | orchestrator | 2026-01-30 03:19:27.105257 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-30 03:19:27.105265 | orchestrator | Friday 30 January 2026 03:19:20 +0000 (0:00:00.201) 0:00:00.201 ******** 2026-01-30 03:19:27.105274 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:19:27.105283 | orchestrator | 2026-01-30 03:19:27.105291 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-30 03:19:27.105296 | orchestrator | Friday 30 January 2026 03:19:21 +0000 (0:00:00.907) 0:00:01.108 ******** 2026-01-30 03:19:27.105300 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 03:19:27.105304 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 03:19:27.105310 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 03:19:27.105315 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 03:19:27.105323 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 03:19:27.105327 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 03:19:27.105331 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 03:19:27.105335 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 03:19:27.105338 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-01-30 03:19:27.105380 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 03:19:27.105386 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 03:19:27.105391 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 03:19:27.105395 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 03:19:27.105399 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 03:19:27.105403 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 03:19:27.105407 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 03:19:27.105411 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 03:19:27.105429 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 03:19:27.105433 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 03:19:27.105437 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 03:19:27.105441 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 03:19:27.105445 | orchestrator | 2026-01-30 03:19:27.105449 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-30 03:19:27.105455 | orchestrator | Friday 30 January 2026 03:19:23 +0000 (0:00:02.293) 0:00:03.402 ******** 2026-01-30 03:19:27.105461 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:19:27.105468 | orchestrator | 2026-01-30 03:19:27.105475 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-30 03:19:27.105485 | orchestrator | Friday 30 January 2026 03:19:24 +0000 (0:00:01.103) 0:00:04.506 ******** 2026-01-30 03:19:27.105494 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:27.105509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:27.105537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:27.105542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:27.105546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:27.105550 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:27.105559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:27.105563 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:27.105567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:27.105575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262829 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262898 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:28.262989 | orchestrator | 2026-01-30 03:19:28.263002 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-30 03:19:28.263014 | orchestrator | Friday 30 January 2026 03:19:27 +0000 (0:00:03.197) 0:00:07.703 ******** 2026-01-30 03:19:28.263029 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:28.263040 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.263052 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.263089 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:19:28.263105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:28.263145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.764976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765103 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:19:28.765171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:28.765188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765212 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:19:28.765224 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:28.765248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765272 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:19:28.765303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:28.765324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765377 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:19:28.765391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:28.765402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:28.765425 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:19:28.765437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:28.765456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.538829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.538941 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:19:29.538965 | orchestrator | 2026-01-30 03:19:29.538982 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-30 03:19:29.539001 | orchestrator | Friday 30 January 2026 03:19:28 +0000 (0:00:00.760) 0:00:08.464 ******** 2026-01-30 03:19:29.539019 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:29.539040 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.539052 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.539063 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:19:29.539093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:29.539109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.539146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.539169 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:19:29.539226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:29.539244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.539261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.539278 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:19:29.539295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:29.539312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-01-30 03:19:29.539337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:29.539390 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:19:29.539403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:29.539450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:34.246452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:34.246542 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:19:34.246557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:34.246567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:34.246576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:34.246585 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:19:34.246593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 03:19:34.246621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:34.246630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:34.246637 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:19:34.246645 | orchestrator | 2026-01-30 
03:19:34.246654 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-30 03:19:34.246663 | orchestrator | Friday 30 January 2026 03:19:30 +0000 (0:00:01.650) 0:00:10.115 ******** 2026-01-30 03:19:34.246671 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:19:34.246678 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:19:34.246686 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:19:34.246694 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:19:34.246717 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:19:34.246726 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:19:34.246733 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:19:34.246741 | orchestrator | 2026-01-30 03:19:34.246749 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-30 03:19:34.246756 | orchestrator | Friday 30 January 2026 03:19:31 +0000 (0:00:00.622) 0:00:10.737 ******** 2026-01-30 03:19:34.246764 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:19:34.246772 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:19:34.246779 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:19:34.246787 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:19:34.246795 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:19:34.246802 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:19:34.246810 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:19:34.246817 | orchestrator | 2026-01-30 03:19:34.246825 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-30 03:19:34.246832 | orchestrator | Friday 30 January 2026 03:19:31 +0000 (0:00:00.741) 0:00:11.479 ******** 2026-01-30 03:19:34.246841 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:34.246862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:34.246876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:34.246888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:34.246896 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:34.246904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:34.246924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:36.946360 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946567 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:36.946756 | orchestrator | 2026-01-30 03:19:36.946776 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-30 03:19:36.946797 | orchestrator | Friday 30 January 2026 03:19:35 +0000 (0:00:03.402) 
0:00:14.881 ******** 2026-01-30 03:19:36.946815 | orchestrator | [WARNING]: Skipped 2026-01-30 03:19:36.946837 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-30 03:19:36.946860 | orchestrator | to this access issue: 2026-01-30 03:19:36.946881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-30 03:19:36.946901 | orchestrator | directory 2026-01-30 03:19:36.946922 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 03:19:36.946943 | orchestrator | 2026-01-30 03:19:36.946963 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-30 03:19:36.946983 | orchestrator | Friday 30 January 2026 03:19:36 +0000 (0:00:00.902) 0:00:15.784 ******** 2026-01-30 03:19:36.947003 | orchestrator | [WARNING]: Skipped 2026-01-30 03:19:36.947035 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-30 03:19:46.231709 | orchestrator | to this access issue: 2026-01-30 03:19:46.231818 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-30 03:19:46.231832 | orchestrator | directory 2026-01-30 03:19:46.231844 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 03:19:46.231855 | orchestrator | 2026-01-30 03:19:46.231866 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-30 03:19:46.231877 | orchestrator | Friday 30 January 2026 03:19:37 +0000 (0:00:01.109) 0:00:16.894 ******** 2026-01-30 03:19:46.231910 | orchestrator | [WARNING]: Skipped 2026-01-30 03:19:46.231920 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-30 03:19:46.231929 | orchestrator | to this access issue: 2026-01-30 03:19:46.231939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-30 
03:19:46.231949 | orchestrator | directory 2026-01-30 03:19:46.231959 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 03:19:46.231968 | orchestrator | 2026-01-30 03:19:46.231977 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-30 03:19:46.231987 | orchestrator | Friday 30 January 2026 03:19:37 +0000 (0:00:00.787) 0:00:17.681 ******** 2026-01-30 03:19:46.231995 | orchestrator | [WARNING]: Skipped 2026-01-30 03:19:46.232004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-30 03:19:46.232013 | orchestrator | to this access issue: 2026-01-30 03:19:46.232022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-30 03:19:46.232030 | orchestrator | directory 2026-01-30 03:19:46.232038 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 03:19:46.232046 | orchestrator | 2026-01-30 03:19:46.232055 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-30 03:19:46.232064 | orchestrator | Friday 30 January 2026 03:19:38 +0000 (0:00:00.770) 0:00:18.451 ******** 2026-01-30 03:19:46.232074 | orchestrator | changed: [testbed-manager] 2026-01-30 03:19:46.232083 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:19:46.232091 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:19:46.232099 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:19:46.232107 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:19:46.232116 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:19:46.232142 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:19:46.232152 | orchestrator | 2026-01-30 03:19:46.232160 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-30 03:19:46.232168 | orchestrator | Friday 30 January 2026 03:19:41 +0000 (0:00:02.372) 0:00:20.823 ******** 2026-01-30 
03:19:46.232177 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 03:19:46.232188 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 03:19:46.232198 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 03:19:46.232206 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 03:19:46.232216 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 03:19:46.232225 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 03:19:46.232240 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 03:19:46.232250 | orchestrator | 2026-01-30 03:19:46.232261 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-30 03:19:46.232272 | orchestrator | Friday 30 January 2026 03:19:43 +0000 (0:00:01.955) 0:00:22.779 ******** 2026-01-30 03:19:46.232325 | orchestrator | changed: [testbed-manager] 2026-01-30 03:19:46.232337 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:19:46.232347 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:19:46.232357 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:19:46.232368 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:19:46.232378 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:19:46.232388 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:19:46.232399 | orchestrator | 2026-01-30 03:19:46.232409 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-30 03:19:46.232429 | orchestrator | Friday 30 January 2026 
03:19:44 +0000 (0:00:01.797) 0:00:24.577 ******** 2026-01-30 03:19:46.232443 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:46.232475 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:46.232486 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:46.232496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:46.232506 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:46.232522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:46.232541 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:46.232560 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:46.232569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:46.232585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 
03:19:51.753683 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:51.753793 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:51.753812 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:51.753842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:51.753881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:51.753893 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:51.753906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:19:51.753946 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:51.753960 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:51.753971 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:51.753983 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:51.753996 | orchestrator | 2026-01-30 03:19:51.754009 | orchestrator | TASK 
[common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-30 03:19:51.754096 | orchestrator | Friday 30 January 2026 03:19:46 +0000 (0:00:01.523) 0:00:26.100 ******** 2026-01-30 03:19:51.754108 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 03:19:51.754120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 03:19:51.754140 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 03:19:51.754152 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 03:19:51.754162 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 03:19:51.754173 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 03:19:51.754184 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 03:19:51.754195 | orchestrator | 2026-01-30 03:19:51.754206 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-30 03:19:51.754219 | orchestrator | Friday 30 January 2026 03:19:48 +0000 (0:00:01.802) 0:00:27.902 ******** 2026-01-30 03:19:51.754232 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 03:19:51.754246 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 03:19:51.754259 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 03:19:51.754312 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 03:19:51.754326 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 03:19:51.754338 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 03:19:51.754351 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 03:19:51.754363 | orchestrator | 2026-01-30 03:19:51.754376 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-30 03:19:51.754389 | orchestrator | Friday 30 January 2026 03:19:49 +0000 (0:00:01.630) 0:00:29.533 ******** 2026-01-30 03:19:51.754402 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:51.754427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:52.382802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:52.382908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:52.382948 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:52.382977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-01-30 03:19:52.382989 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 03:19:52.383001 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383043 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383116 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:19:52.383150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:21:16.818616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:21:16.818789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:21:16.818819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:21:16.818859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:21:16.818880 | orchestrator | 2026-01-30 03:21:16.818902 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-01-30 03:21:16.818924 | orchestrator | Friday 30 January 2026 03:19:52 +0000 (0:00:02.549) 0:00:32.082 ******** 2026-01-30 03:21:16.818946 | orchestrator | changed: [testbed-manager] 2026-01-30 03:21:16.818969 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:21:16.819025 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:21:16.819036 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:21:16.819047 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:21:16.819058 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:21:16.819069 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:21:16.819080 | orchestrator | 2026-01-30 03:21:16.819091 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-01-30 03:21:16.819102 | orchestrator | Friday 30 January 2026 03:19:53 +0000 (0:00:01.303) 0:00:33.386 ******** 2026-01-30 03:21:16.819113 | orchestrator | changed: [testbed-manager] 2026-01-30 03:21:16.819129 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:21:16.819147 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:21:16.819166 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:21:16.819184 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:21:16.819203 | orchestrator | changed: 
[testbed-node-4] 2026-01-30 03:21:16.819221 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:21:16.819239 | orchestrator | 2026-01-30 03:21:16.819258 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 03:21:16.819275 | orchestrator | Friday 30 January 2026 03:19:54 +0000 (0:00:01.013) 0:00:34.399 ******** 2026-01-30 03:21:16.819293 | orchestrator | 2026-01-30 03:21:16.819312 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 03:21:16.819333 | orchestrator | Friday 30 January 2026 03:19:54 +0000 (0:00:00.060) 0:00:34.460 ******** 2026-01-30 03:21:16.819353 | orchestrator | 2026-01-30 03:21:16.819372 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 03:21:16.819393 | orchestrator | Friday 30 January 2026 03:19:54 +0000 (0:00:00.062) 0:00:34.523 ******** 2026-01-30 03:21:16.819412 | orchestrator | 2026-01-30 03:21:16.819431 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 03:21:16.819444 | orchestrator | Friday 30 January 2026 03:19:54 +0000 (0:00:00.060) 0:00:34.584 ******** 2026-01-30 03:21:16.819456 | orchestrator | 2026-01-30 03:21:16.819468 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 03:21:16.819497 | orchestrator | Friday 30 January 2026 03:19:55 +0000 (0:00:00.203) 0:00:34.787 ******** 2026-01-30 03:21:16.819508 | orchestrator | 2026-01-30 03:21:16.819519 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 03:21:16.819529 | orchestrator | Friday 30 January 2026 03:19:55 +0000 (0:00:00.058) 0:00:34.846 ******** 2026-01-30 03:21:16.819540 | orchestrator | 2026-01-30 03:21:16.819552 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 03:21:16.819563 
| orchestrator | Friday 30 January 2026 03:19:55 +0000 (0:00:00.061) 0:00:34.907 ******** 2026-01-30 03:21:16.819574 | orchestrator | 2026-01-30 03:21:16.819585 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-30 03:21:16.819596 | orchestrator | Friday 30 January 2026 03:19:55 +0000 (0:00:00.083) 0:00:34.991 ******** 2026-01-30 03:21:16.819607 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:21:16.819617 | orchestrator | changed: [testbed-manager] 2026-01-30 03:21:16.819629 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:21:16.819640 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:21:16.819650 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:21:16.819684 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:21:16.819704 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:21:16.819720 | orchestrator | 2026-01-30 03:21:16.819737 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-30 03:21:16.819754 | orchestrator | Friday 30 January 2026 03:20:33 +0000 (0:00:38.219) 0:01:13.211 ******** 2026-01-30 03:21:16.819773 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:21:16.819792 | orchestrator | changed: [testbed-manager] 2026-01-30 03:21:16.819810 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:21:16.819823 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:21:16.819834 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:21:16.819845 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:21:16.819856 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:21:16.819866 | orchestrator | 2026-01-30 03:21:16.819878 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-30 03:21:16.819889 | orchestrator | Friday 30 January 2026 03:21:07 +0000 (0:00:34.419) 0:01:47.631 ******** 2026-01-30 03:21:16.819899 | orchestrator | ok: [testbed-manager] 
2026-01-30 03:21:16.819912 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:21:16.819922 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:21:16.819933 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:21:16.819944 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:21:16.819955 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:21:16.819966 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:21:16.820009 | orchestrator | 2026-01-30 03:21:16.820020 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-30 03:21:16.820031 | orchestrator | Friday 30 January 2026 03:21:09 +0000 (0:00:01.676) 0:01:49.307 ******** 2026-01-30 03:21:16.820042 | orchestrator | changed: [testbed-manager] 2026-01-30 03:21:16.820053 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:21:16.820064 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:21:16.820075 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:21:16.820086 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:21:16.820097 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:21:16.820108 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:21:16.820118 | orchestrator | 2026-01-30 03:21:16.820205 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:21:16.820219 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 03:21:16.820232 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 03:21:16.820256 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 03:21:16.820280 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 03:21:16.820292 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 
ignored=0 2026-01-30 03:21:16.820303 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 03:21:16.820314 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 03:21:16.820324 | orchestrator | 2026-01-30 03:21:16.820336 | orchestrator | 2026-01-30 03:21:16.820347 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:21:16.820358 | orchestrator | Friday 30 January 2026 03:21:16 +0000 (0:00:07.181) 0:01:56.489 ******** 2026-01-30 03:21:16.820369 | orchestrator | =============================================================================== 2026-01-30 03:21:16.820380 | orchestrator | common : Restart fluentd container ------------------------------------- 38.22s 2026-01-30 03:21:16.820391 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.42s 2026-01-30 03:21:16.820401 | orchestrator | common : Restart cron container ----------------------------------------- 7.18s 2026-01-30 03:21:16.820412 | orchestrator | common : Copying over config.json files for services -------------------- 3.40s 2026-01-30 03:21:16.820423 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.20s 2026-01-30 03:21:16.820434 | orchestrator | common : Check common containers ---------------------------------------- 2.55s 2026-01-30 03:21:16.820444 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.37s 2026-01-30 03:21:16.820463 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.29s 2026-01-30 03:21:16.820482 | orchestrator | common : Copying over cron logrotate config file ------------------------ 1.96s 2026-01-30 03:21:16.820500 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.80s 2026-01-30 03:21:16.820518 
| orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.80s 2026-01-30 03:21:16.820536 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.68s 2026-01-30 03:21:16.820554 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.65s 2026-01-30 03:21:16.820570 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.63s 2026-01-30 03:21:16.820587 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.52s 2026-01-30 03:21:16.820606 | orchestrator | common : Creating log volume -------------------------------------------- 1.30s 2026-01-30 03:21:16.820641 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.11s 2026-01-30 03:21:17.156704 | orchestrator | common : include_tasks -------------------------------------------------- 1.10s 2026-01-30 03:21:17.156799 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.01s 2026-01-30 03:21:17.156813 | orchestrator | common : include_tasks -------------------------------------------------- 0.91s 2026-01-30 03:21:19.369245 | orchestrator | 2026-01-30 03:21:19 | INFO  | Task 2f8557ca-a4bd-46e8-be90-5f084ba03549 (loadbalancer) was prepared for execution. 2026-01-30 03:21:19.369348 | orchestrator | 2026-01-30 03:21:19 | INFO  | It takes a moment until task 2f8557ca-a4bd-46e8-be90-5f084ba03549 (loadbalancer) has been started and output is visible here. 
2026-01-30 03:21:32.648094 | orchestrator | 2026-01-30 03:21:32.648199 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 03:21:32.648212 | orchestrator | 2026-01-30 03:21:32.648221 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 03:21:32.648230 | orchestrator | Friday 30 January 2026 03:21:23 +0000 (0:00:00.234) 0:00:00.234 ******** 2026-01-30 03:21:32.648257 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:21:32.648267 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:21:32.648275 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:21:32.648283 | orchestrator | 2026-01-30 03:21:32.648292 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 03:21:32.648300 | orchestrator | Friday 30 January 2026 03:21:23 +0000 (0:00:00.281) 0:00:00.515 ******** 2026-01-30 03:21:32.648309 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-30 03:21:32.648317 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-30 03:21:32.648325 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-30 03:21:32.648332 | orchestrator | 2026-01-30 03:21:32.648340 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-30 03:21:32.648348 | orchestrator | 2026-01-30 03:21:32.648356 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-30 03:21:32.648377 | orchestrator | Friday 30 January 2026 03:21:24 +0000 (0:00:00.376) 0:00:00.892 ******** 2026-01-30 03:21:32.648385 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:21:32.648394 | orchestrator | 2026-01-30 03:21:32.648402 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
*************************************** 2026-01-30 03:21:32.648410 | orchestrator | Friday 30 January 2026 03:21:24 +0000 (0:00:00.488) 0:00:01.380 ******** 2026-01-30 03:21:32.648417 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:21:32.648425 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:21:32.648433 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:21:32.648441 | orchestrator | 2026-01-30 03:21:32.648449 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-30 03:21:32.648457 | orchestrator | Friday 30 January 2026 03:21:25 +0000 (0:00:00.592) 0:00:01.973 ******** 2026-01-30 03:21:32.648465 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:21:32.648473 | orchestrator | 2026-01-30 03:21:32.648481 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-30 03:21:32.648489 | orchestrator | Friday 30 January 2026 03:21:25 +0000 (0:00:00.617) 0:00:02.590 ******** 2026-01-30 03:21:32.648496 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:21:32.648504 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:21:32.648512 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:21:32.648520 | orchestrator | 2026-01-30 03:21:32.648528 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-30 03:21:32.648536 | orchestrator | Friday 30 January 2026 03:21:26 +0000 (0:00:00.583) 0:00:03.174 ******** 2026-01-30 03:21:32.648544 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-30 03:21:32.648554 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-30 03:21:32.648563 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-30 03:21:32.648572 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-30 03:21:32.648581 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-30 03:21:32.648590 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-30 03:21:32.648598 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-30 03:21:32.648608 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-30 03:21:32.648618 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-30 03:21:32.648627 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-30 03:21:32.648642 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-30 03:21:32.648651 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-30 03:21:32.648660 | orchestrator | 2026-01-30 03:21:32.648669 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-30 03:21:32.648678 | orchestrator | Friday 30 January 2026 03:21:28 +0000 (0:00:02.121) 0:00:05.296 ******** 2026-01-30 03:21:32.648687 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-30 03:21:32.648697 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-01-30 03:21:32.648706 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-30 03:21:32.648715 | orchestrator | 2026-01-30 03:21:32.648724 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-30 03:21:32.648734 | orchestrator | Friday 30 January 2026 03:21:29 +0000 (0:00:00.698) 0:00:05.994 ******** 2026-01-30 03:21:32.648743 | orchestrator | changed: [testbed-node-1] => 
(item=ip_vs) 2026-01-30 03:21:32.648755 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-01-30 03:21:32.648768 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-01-30 03:21:32.648781 | orchestrator | 2026-01-30 03:21:32.648795 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-30 03:21:32.648808 | orchestrator | Friday 30 January 2026 03:21:30 +0000 (0:00:01.227) 0:00:07.222 ******** 2026-01-30 03:21:32.648821 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-30 03:21:32.648834 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:21:32.648867 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-30 03:21:32.648882 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:21:32.648894 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-30 03:21:32.648908 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:21:32.648920 | orchestrator | 2026-01-30 03:21:32.648962 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-01-30 03:21:32.648976 | orchestrator | Friday 30 January 2026 03:21:30 +0000 (0:00:00.490) 0:00:07.713 ******** 2026-01-30 03:21:32.649000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:32.649022 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:32.649036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:32.649060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 
03:21:32.649075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:32.649110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:37.573708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:37.573826 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:37.573843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:37.573856 | orchestrator | 2026-01-30 03:21:37.573869 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-30 03:21:37.573882 | orchestrator | Friday 30 January 2026 03:21:32 +0000 (0:00:01.777) 0:00:09.490 ******** 2026-01-30 03:21:37.573894 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:21:37.573993 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:21:37.574007 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:21:37.574081 | orchestrator | 2026-01-30 03:21:37.574096 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-30 03:21:37.574107 | orchestrator | Friday 30 January 2026 03:21:33 +0000 (0:00:00.836) 0:00:10.326 ******** 2026-01-30 03:21:37.574118 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-01-30 03:21:37.574129 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-01-30 
03:21:37.574140 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-01-30 03:21:37.574151 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-01-30 03:21:37.574162 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-01-30 03:21:37.574173 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-01-30 03:21:37.574184 | orchestrator | 2026-01-30 03:21:37.574195 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-30 03:21:37.574209 | orchestrator | Friday 30 January 2026 03:21:34 +0000 (0:00:01.479) 0:00:11.806 ******** 2026-01-30 03:21:37.574222 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:21:37.574234 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:21:37.574246 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:21:37.574259 | orchestrator | 2026-01-30 03:21:37.574271 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-30 03:21:37.574284 | orchestrator | Friday 30 January 2026 03:21:35 +0000 (0:00:00.827) 0:00:12.633 ******** 2026-01-30 03:21:37.574296 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:21:37.574309 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:21:37.574320 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:21:37.574332 | orchestrator | 2026-01-30 03:21:37.574345 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-30 03:21:37.574358 | orchestrator | Friday 30 January 2026 03:21:37 +0000 (0:00:01.247) 0:00:13.881 ******** 2026-01-30 03:21:37.574371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:21:37.574406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:21:37.574420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:37.574435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 03:21:37.574455 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:21:37.574468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:21:37.574517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:21:37.574530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:37.574541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 03:21:37.574553 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:21:37.574572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:21:40.306379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:21:40.306495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:40.306510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 03:21:40.306521 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:21:40.306533 | orchestrator | 2026-01-30 03:21:40.306544 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-01-30 03:21:40.306554 | orchestrator | Friday 30 January 2026 03:21:37 +0000 (0:00:00.540) 0:00:14.422 ******** 2026-01-30 03:21:40.306564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:40.306574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:40.306583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:40.306631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:40.306642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:40.306651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288', 
'__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 03:21:40.306660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:40.306670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:40.306679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288', 
'__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 03:21:40.306717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:48.529169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:48.529344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288', 
'__omit_place_holder__162ace946491c584804e9cd174b450a5ed5a0288'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 03:21:48.529391 | orchestrator | 2026-01-30 03:21:48.529415 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-30 03:21:48.529436 | orchestrator | Friday 30 January 2026 03:21:40 +0000 (0:00:02.730) 0:00:17.152 ******** 2026-01-30 03:21:48.529456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:48.529476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:48.529495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:48.529548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:48.529615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:48.529639 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:48.529658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:48.529679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:48.529698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:48.529718 | orchestrator | 2026-01-30 03:21:48.529737 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-30 03:21:48.529756 | orchestrator | Friday 30 January 2026 03:21:43 +0000 (0:00:03.186) 0:00:20.339 ******** 2026-01-30 03:21:48.529788 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-30 03:21:48.529808 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-30 03:21:48.529827 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-30 03:21:48.529846 | orchestrator | 2026-01-30 03:21:48.529864 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-30 03:21:48.529911 | orchestrator | Friday 30 January 2026 03:21:45 +0000 (0:00:01.899) 0:00:22.238 ******** 2026-01-30 03:21:48.529933 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-30 03:21:48.529952 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-30 03:21:48.529970 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-30 03:21:48.529981 | orchestrator | 2026-01-30 03:21:48.529992 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-30 03:21:48.530003 | orchestrator | Friday 30 January 2026 03:21:47 +0000 
(0:00:02.608) 0:00:24.847 ******** 2026-01-30 03:21:48.530014 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:21:48.530136 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:21:48.530149 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:21:48.530161 | orchestrator | 2026-01-30 03:21:48.530186 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-30 03:21:59.252538 | orchestrator | Friday 30 January 2026 03:21:48 +0000 (0:00:00.536) 0:00:25.384 ******** 2026-01-30 03:21:59.252704 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-30 03:21:59.252763 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-30 03:21:59.252787 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-30 03:21:59.252807 | orchestrator | 2026-01-30 03:21:59.252827 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-30 03:21:59.252877 | orchestrator | Friday 30 January 2026 03:21:50 +0000 (0:00:01.929) 0:00:27.313 ******** 2026-01-30 03:21:59.252900 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-30 03:21:59.252919 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-30 03:21:59.252938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-30 03:21:59.252957 | orchestrator | 2026-01-30 03:21:59.252976 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-30 03:21:59.252995 | orchestrator | Friday 30 January 2026 
03:21:52 +0000 (0:00:01.936) 0:00:29.249 ******** 2026-01-30 03:21:59.253015 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-30 03:21:59.253035 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-30 03:21:59.253055 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-30 03:21:59.253075 | orchestrator | 2026-01-30 03:21:59.253109 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-30 03:21:59.253130 | orchestrator | Friday 30 January 2026 03:21:53 +0000 (0:00:01.356) 0:00:30.605 ******** 2026-01-30 03:21:59.253150 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-30 03:21:59.253170 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-30 03:21:59.253188 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-30 03:21:59.253208 | orchestrator | 2026-01-30 03:21:59.253255 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-30 03:21:59.253276 | orchestrator | Friday 30 January 2026 03:21:55 +0000 (0:00:01.322) 0:00:31.928 ******** 2026-01-30 03:21:59.253295 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:21:59.253313 | orchestrator | 2026-01-30 03:21:59.253333 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-30 03:21:59.253352 | orchestrator | Friday 30 January 2026 03:21:55 +0000 (0:00:00.489) 0:00:32.417 ******** 2026-01-30 03:21:59.253375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:59.253399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:59.253429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 03:21:59.253479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:59.253502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:59.253521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:21:59.253554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:59.253575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:59.253594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:21:59.253614 | orchestrator | 2026-01-30 03:21:59.253632 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-30 03:21:59.253651 | orchestrator | Friday 30 January 2026 03:21:58 +0000 (0:00:03.143) 0:00:35.561 ******** 2026-01-30 03:21:59.253690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:21:59.973522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:21:59.973631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:59.973672 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:21:59.973688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:21:59.973701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:21:59.973713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:59.973724 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:21:59.973736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:21:59.973783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:21:59.973797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:59.973817 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:21:59.973829 | orchestrator | 2026-01-30 03:21:59.973841 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-30 
03:21:59.973905 | orchestrator | Friday 30 January 2026 03:21:59 +0000 (0:00:00.540) 0:00:36.102 ******** 2026-01-30 03:21:59.973919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:21:59.973931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:21:59.973943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:21:59.973954 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:21:59.973965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:21:59.973991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:00.728899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:00.729027 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:00.729046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:22:00.729060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:00.729072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:00.729083 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:00.729095 | orchestrator | 2026-01-30 03:22:00.729107 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-30 03:22:00.729120 | orchestrator | Friday 30 January 2026 03:21:59 +0000 (0:00:00.719) 0:00:36.822 ******** 2026-01-30 03:22:00.729131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:22:00.729143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:00.729174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:00.729195 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:00.729215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:22:00.729234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:00.729253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:00.729271 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:00.729289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:22:00.729329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:00.729357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:00.729402 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:02.055751 | orchestrator | 2026-01-30 03:22:02.055868 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-30 03:22:02.055885 | orchestrator | Friday 30 January 2026 03:22:00 +0000 (0:00:00.757) 0:00:37.579 ******** 2026-01-30 03:22:02.055900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:22:02.055914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:02.055926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:02.055937 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:02.055950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:22:02.055961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:02.055995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:02.056025 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:02.056056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:22:02.056068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:02.056078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:02.056088 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:02.056099 | orchestrator | 2026-01-30 03:22:02.056109 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-30 03:22:02.056119 | orchestrator | Friday 30 January 2026 03:22:01 +0000 (0:00:00.566) 0:00:38.146 ******** 2026-01-30 03:22:02.056130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:22:02.056140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:02.056168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:02.056179 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:02.056198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:22:03.001610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:03.001713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:03.001729 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:03.001744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:22:03.001757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:03.001769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:03.001805 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:03.001817 | orchestrator | 2026-01-30 03:22:03.001830 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-01-30 03:22:03.001894 | orchestrator | Friday 30 January 2026 03:22:02 +0000 (0:00:00.759) 0:00:38.905 ******** 2026-01-30 03:22:03.001934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-01-30 03:22:03.001984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:03.002007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:03.002091 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:03.002108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-01-30 03:22:03.002120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:03.002143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:03.002157 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:03.002177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-01-30 03:22:03.002201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:04.298535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:04.298662 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:04.298681 | orchestrator | 2026-01-30 03:22:04.298695 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-30 03:22:04.298708 | orchestrator | Friday 30 January 2026 03:22:02 +0000 (0:00:00.940) 0:00:39.846 ******** 2026-01-30 03:22:04.298740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:22:04.298803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:04.298898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:04.298922 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:04.298943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:22:04.298982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:04.299029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:04.299052 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:04.299072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:22:04.299088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:04.299111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:04.299124 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:04.299138 | orchestrator | 2026-01-30 03:22:04.299152 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-30 03:22:04.299165 | orchestrator | Friday 30 January 2026 03:22:03 +0000 (0:00:00.562) 0:00:40.409 ******** 2026-01-30 03:22:04.299178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 03:22:04.299192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:04.299224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:10.241708 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:10.241796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 03:22:10.241806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:10.241868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:10.241874 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:10.241879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 03:22:10.241894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 03:22:10.241898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 03:22:10.241902 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:10.241906 | orchestrator | 2026-01-30 03:22:10.241911 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-30 03:22:10.241917 | orchestrator | Friday 30 January 2026 03:22:04 +0000 (0:00:00.740) 0:00:41.150 ******** 2026-01-30 03:22:10.241922 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-30 03:22:10.241944 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-30 03:22:10.241951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-30 03:22:10.241957 | orchestrator | 2026-01-30 03:22:10.241964 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-30 03:22:10.241970 | orchestrator | Friday 30 January 2026 03:22:05 +0000 (0:00:01.367) 0:00:42.518 ******** 2026-01-30 03:22:10.241975 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-30 03:22:10.241979 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-30 03:22:10.241983 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-30 03:22:10.241987 | orchestrator | 2026-01-30 03:22:10.241997 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-30 03:22:10.242000 | orchestrator | Friday 30 January 2026 03:22:07 +0000 (0:00:01.542) 0:00:44.061 ******** 2026-01-30 03:22:10.242004 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 03:22:10.242008 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 03:22:10.242012 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 03:22:10.242055 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 03:22:10.242060 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:10.242064 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 03:22:10.242068 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:10.242071 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 03:22:10.242075 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:10.242079 | orchestrator | 2026-01-30 03:22:10.242083 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-30 03:22:10.242087 | orchestrator | Friday 30 January 2026 03:22:07 +0000 (0:00:00.755) 0:00:44.816 ******** 2026-01-30 03:22:10.242091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 03:22:10.242096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 03:22:10.242103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 03:22:10.242114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:22:14.002744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:22:14.002905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 03:22:14.002928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:22:14.002944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:22:14.002957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 03:22:14.002971 | orchestrator | 2026-01-30 03:22:14.003006 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-30 03:22:14.003023 | orchestrator | Friday 30 January 2026 03:22:10 +0000 (0:00:02.276) 0:00:47.093 ******** 2026-01-30 03:22:14.003039 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:22:14.003053 | orchestrator | 2026-01-30 03:22:14.003067 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-30 03:22:14.003082 | orchestrator | Friday 30 January 2026 03:22:10 +0000 (0:00:00.720) 0:00:47.814 ******** 2026-01-30 03:22:14.003129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 03:22:14.003172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 03:22:14.003188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.003203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.003218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 03:22:14.003239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 03:22:14.003254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.003288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.598874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 03:22:14.598981 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 03:22:14.598996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.599025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.599038 | orchestrator | 2026-01-30 03:22:14.599051 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-01-30 03:22:14.599064 | orchestrator | Friday 30 January 2026 03:22:13 +0000 (0:00:03.038) 0:00:50.852 ******** 2026-01-30 03:22:14.599077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 03:22:14.599131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 03:22:14.599160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.599182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.599194 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:14.599208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 03:22:14.599225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 03:22:14.599249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.599269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 03:22:14.599289 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:14.599321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 03:22:22.541670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 03:22:22.541834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-01-30 03:22:22.541855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 03:22:22.541889 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:22.541902 | orchestrator | 2026-01-30 03:22:22.541913 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-01-30 03:22:22.541925 | orchestrator | Friday 30 January 2026 03:22:14 +0000 (0:00:00.599) 0:00:51.451 ******** 2026-01-30 03:22:22.541936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-30 03:22:22.541948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-30 03:22:22.541959 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:22.541987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-30 03:22:22.541998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-30 03:22:22.542007 | 
orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:22.542080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-01-30 03:22:22.542094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-01-30 03:22:22.542104 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:22.542114 | orchestrator | 2026-01-30 03:22:22.542124 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-01-30 03:22:22.542134 | orchestrator | Friday 30 January 2026 03:22:15 +0000 (0:00:01.158) 0:00:52.610 ******** 2026-01-30 03:22:22.542144 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:22.542154 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:22.542164 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:22.542173 | orchestrator | 2026-01-30 03:22:22.542184 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-01-30 03:22:22.542194 | orchestrator | Friday 30 January 2026 03:22:16 +0000 (0:00:01.204) 0:00:53.814 ******** 2026-01-30 03:22:22.542204 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:22.542216 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:22.542227 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:22.542238 | orchestrator | 2026-01-30 03:22:22.542249 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-01-30 03:22:22.542261 | orchestrator | Friday 30 January 2026 03:22:18 +0000 (0:00:01.857) 0:00:55.671 ******** 2026-01-30 03:22:22.542272 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:22:22.542283 | 
orchestrator | 2026-01-30 03:22:22.542312 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-01-30 03:22:22.542324 | orchestrator | Friday 30 January 2026 03:22:19 +0000 (0:00:00.590) 0:00:56.262 ******** 2026-01-30 03:22:22.542337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 03:22:22.542366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:22.542379 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:22.542392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 03:22:22.542404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:22.542424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:23.122520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 03:22:23.122604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:23.122611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:23.122616 | orchestrator | 2026-01-30 03:22:23.122622 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-01-30 03:22:23.122627 | orchestrator | Friday 30 January 2026 03:22:22 +0000 (0:00:03.130) 0:00:59.392 ******** 2026-01-30 03:22:23.122633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 03:22:23.122637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:23.122669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:23.122674 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:23.122682 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 03:22:23.122687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:23.122691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:23.122695 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:23.122699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 03:22:23.122711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 03:22:31.857158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:31.857260 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:31.857275 | orchestrator | 2026-01-30 03:22:31.857287 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-01-30 03:22:31.857297 | orchestrator | Friday 30 January 2026 03:22:23 +0000 (0:00:00.581) 0:00:59.973 ******** 2026-01-30 03:22:31.857322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-30 03:22:31.857335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-30 03:22:31.857346 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:31.857355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-30 03:22:31.857364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-30 03:22:31.857374 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:31.857383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-30 03:22:31.857392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-01-30 03:22:31.857401 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:31.857409 | orchestrator | 2026-01-30 03:22:31.857419 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-01-30 03:22:31.857427 | orchestrator | Friday 30 January 2026 03:22:23 +0000 (0:00:00.764) 0:01:00.738 ******** 2026-01-30 03:22:31.857437 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:31.857445 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:31.857454 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:31.857463 | orchestrator | 2026-01-30 03:22:31.857472 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-01-30 03:22:31.857481 | orchestrator | Friday 30 January 2026 03:22:25 +0000 (0:00:01.414) 0:01:02.152 ******** 2026-01-30 03:22:31.857508 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:31.857517 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:31.857526 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:31.857534 | orchestrator | 2026-01-30 03:22:31.857543 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-01-30 03:22:31.857555 | orchestrator | 
Friday 30 January 2026 03:22:27 +0000 (0:00:01.830) 0:01:03.982 ******** 2026-01-30 03:22:31.857570 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:31.857584 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:31.857596 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:31.857610 | orchestrator | 2026-01-30 03:22:31.857624 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-01-30 03:22:31.857639 | orchestrator | Friday 30 January 2026 03:22:27 +0000 (0:00:00.288) 0:01:04.271 ******** 2026-01-30 03:22:31.857654 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:22:31.857665 | orchestrator | 2026-01-30 03:22:31.857675 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-01-30 03:22:31.857685 | orchestrator | Friday 30 January 2026 03:22:28 +0000 (0:00:00.607) 0:01:04.879 ******** 2026-01-30 03:22:31.857726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-01-30 03:22:31.857754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-01-30 03:22:31.857821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-01-30 03:22:31.857839 | orchestrator | 2026-01-30 03:22:31.857854 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-01-30 03:22:31.857866 | orchestrator | Friday 30 January 2026 03:22:30 +0000 (0:00:02.546) 0:01:07.425 ******** 2026-01-30 03:22:31.857886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-30 03:22:31.857896 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:31.857906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-30 03:22:31.857915 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:31.857933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-30 03:22:39.076174 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:39.076281 | orchestrator | 2026-01-30 03:22:39.076304 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-30 03:22:39.076322 | orchestrator | Friday 30 January 2026 03:22:31 +0000 (0:00:01.285) 0:01:08.711 ******** 2026-01-30 03:22:39.076360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-30 03:22:39.076381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-30 03:22:39.076399 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:39.076415 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-30 03:22:39.076456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-30 03:22:39.076499 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:39.076516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-30 03:22:39.076532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-30 03:22:39.076547 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:39.076561 | orchestrator | 2026-01-30 03:22:39.076576 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-30 03:22:39.076592 | orchestrator | Friday 30 January 2026 03:22:33 +0000 (0:00:01.703) 0:01:10.414 ******** 2026-01-30 03:22:39.076606 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:39.076621 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:39.076636 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:39.076652 | orchestrator | 2026-01-30 03:22:39.076670 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-30 03:22:39.076686 | orchestrator | Friday 30 January 2026 03:22:33 +0000 (0:00:00.437) 0:01:10.852 ******** 2026-01-30 03:22:39.076700 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:39.076715 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:39.076731 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:39.076776 | orchestrator | 2026-01-30 03:22:39.076792 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-30 03:22:39.076803 | orchestrator | Friday 30 January 2026 03:22:35 +0000 (0:00:01.173) 0:01:12.025 ******** 2026-01-30 03:22:39.076814 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:22:39.076824 | orchestrator | 2026-01-30 03:22:39.076834 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-30 03:22:39.076844 | orchestrator | Friday 30 January 2026 03:22:36 +0000 (0:00:00.894) 0:01:12.920 ******** 2026-01-30 03:22:39.076887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 03:22:39.076913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.076926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 
03:22:39.076937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.076948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 03:22:39.076966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 03:22:39.699533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699602 | orchestrator | 2026-01-30 03:22:39.699615 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-30 03:22:39.699628 | orchestrator | Friday 30 January 2026 03:22:39 +0000 (0:00:03.093) 0:01:16.013 ******** 2026-01-30 03:22:39.699640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-30 03:22:39.699652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 03:22:39.699687 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:39.699715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-30 03:22:45.584619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-01-30 03:22:45.584717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 03:22:45.584748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 03:22:45.584761 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:45.584771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-30 03:22:45.584779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:22:45.584830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 
03:22:45.584838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 03:22:45.584845 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:45.584852 | orchestrator | 2026-01-30 03:22:45.584861 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-30 03:22:45.584872 | orchestrator | Friday 30 January 2026 03:22:39 +0000 (0:00:00.635) 0:01:16.649 ******** 2026-01-30 03:22:45.584884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-30 03:22:45.584895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-30 03:22:45.584907 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:45.584917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-30 03:22:45.584927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-30 03:22:45.584934 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:45.584940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-30 03:22:45.584946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-01-30 03:22:45.584953 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:45.584959 | orchestrator | 2026-01-30 03:22:45.584966 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-30 03:22:45.584972 | orchestrator | Friday 30 January 2026 03:22:41 +0000 (0:00:01.239) 0:01:17.889 ******** 2026-01-30 03:22:45.584978 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:45.584992 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:45.584999 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:45.585009 | orchestrator | 2026-01-30 03:22:45.585020 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-30 03:22:45.585030 | orchestrator | Friday 30 January 2026 03:22:42 +0000 (0:00:01.204) 0:01:19.093 ******** 2026-01-30 03:22:45.585037 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:45.585044 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:45.585050 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:45.585056 | orchestrator | 2026-01-30 03:22:45.585063 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-30 
03:22:45.585069 | orchestrator | Friday 30 January 2026 03:22:44 +0000 (0:00:01.864) 0:01:20.958 ******** 2026-01-30 03:22:45.585075 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:45.585081 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:45.585087 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:45.585094 | orchestrator | 2026-01-30 03:22:45.585100 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-01-30 03:22:45.585106 | orchestrator | Friday 30 January 2026 03:22:44 +0000 (0:00:00.282) 0:01:21.240 ******** 2026-01-30 03:22:45.585112 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:45.585119 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:45.585125 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:45.585134 | orchestrator | 2026-01-30 03:22:45.585144 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-30 03:22:45.585156 | orchestrator | Friday 30 January 2026 03:22:44 +0000 (0:00:00.288) 0:01:21.529 ******** 2026-01-30 03:22:45.585167 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:22:45.585176 | orchestrator | 2026-01-30 03:22:45.585184 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-30 03:22:45.585195 | orchestrator | Friday 30 January 2026 03:22:45 +0000 (0:00:00.906) 0:01:22.436 ******** 2026-01-30 03:22:48.659363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 03:22:48.659502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 03:22:48.659518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 03:22:48.659554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 03:22:48.659566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 03:22:48.659608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 03:22:48.659621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 03:22:48.659631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 03:22:48.659641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 03:22:48.659660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 03:22:48.659670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 03:22:48.659692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389598 | orchestrator | 2026-01-30 03:22:49.389614 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-30 03:22:49.389628 | orchestrator | Friday 30 January 2026 03:22:48 +0000 (0:00:03.235) 0:01:25.671 ******** 2026-01-30 03:22:49.389642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 03:22:49.389657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-01-30 03:22:49.389672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.389693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.793558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.793687 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.793806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.793825 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:49.793841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 03:22:49.793855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 03:22:49.794469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.794521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.794534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.794562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.794579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 03:22:49.794591 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:49.794603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 03:22:49.794616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 03:22:49.794636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 03:22:58.912285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 03:22:58.912403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 03:22:58.912438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:22:58.912452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 03:22:58.912465 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:58.912479 | orchestrator | 2026-01-30 03:22:58.912492 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-30 03:22:58.912505 | orchestrator | Friday 30 January 2026 03:22:49 +0000 (0:00:00.971) 0:01:26.643 ******** 2026-01-30 03:22:58.912517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-30 03:22:58.912531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-30 03:22:58.912543 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:58.912555 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-30 03:22:58.912566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-30 03:22:58.912577 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:58.912589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-01-30 03:22:58.912619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-01-30 03:22:58.912631 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:58.912642 | orchestrator | 2026-01-30 03:22:58.912653 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-30 03:22:58.912684 | orchestrator | Friday 30 January 2026 03:22:50 +0000 (0:00:01.173) 0:01:27.816 ******** 2026-01-30 03:22:58.912729 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:58.912742 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:58.912753 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:58.912764 | orchestrator | 2026-01-30 03:22:58.912775 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-30 03:22:58.912787 | orchestrator | Friday 30 January 2026 03:22:52 +0000 (0:00:01.219) 0:01:29.036 ******** 2026-01-30 03:22:58.912800 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:22:58.912813 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:22:58.912825 | 
orchestrator | changed: [testbed-node-2] 2026-01-30 03:22:58.912838 | orchestrator | 2026-01-30 03:22:58.912850 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-30 03:22:58.912863 | orchestrator | Friday 30 January 2026 03:22:54 +0000 (0:00:01.888) 0:01:30.925 ******** 2026-01-30 03:22:58.912875 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:22:58.912888 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:22:58.912900 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:22:58.912912 | orchestrator | 2026-01-30 03:22:58.912925 | orchestrator | TASK [include_role : glance] *************************************************** 2026-01-30 03:22:58.912938 | orchestrator | Friday 30 January 2026 03:22:54 +0000 (0:00:00.301) 0:01:31.227 ******** 2026-01-30 03:22:58.912951 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:22:58.912963 | orchestrator | 2026-01-30 03:22:58.912976 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-30 03:22:58.912988 | orchestrator | Friday 30 January 2026 03:22:55 +0000 (0:00:00.958) 0:01:32.185 ******** 2026-01-30 03:22:58.913012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 03:22:58.913040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 03:23:01.595447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 03:23:01.595551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 03:23:01.595621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 03:23:01.595638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 03:23:01.595659 | orchestrator | 2026-01-30 03:23:01.595673 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-30 03:23:01.595758 | orchestrator | Friday 30 January 2026 03:22:59 +0000 (0:00:03.686) 0:01:35.872 ******** 2026-01-30 03:23:01.595805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 03:23:01.720047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 03:23:01.720214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})  2026-01-30 03:23:01.720244 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:01.720303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 03:23:01.720338 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:01.720359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 03:23:01.720398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 
'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 03:23:11.981376 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:11.981518 | orchestrator | 2026-01-30 03:23:11.981537 | orchestrator | TASK [haproxy-config : Configuring 
firewall for glance] ************************ 2026-01-30 03:23:11.981552 | orchestrator | Friday 30 January 2026 03:23:01 +0000 (0:00:02.703) 0:01:38.576 ******** 2026-01-30 03:23:11.981566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 03:23:11.981582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 03:23:11.981596 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:11.981608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 03:23:11.981620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 03:23:11.981632 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:11.981644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 03:23:11.981723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 03:23:11.981739 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:11.981750 | orchestrator | 2026-01-30 03:23:11.981762 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-30 03:23:11.981774 | orchestrator | Friday 30 January 2026 03:23:04 +0000 (0:00:02.917) 0:01:41.494 ******** 2026-01-30 
03:23:11.981809 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:11.981821 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:11.981832 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:11.981843 | orchestrator | 2026-01-30 03:23:11.981854 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-30 03:23:11.981865 | orchestrator | Friday 30 January 2026 03:23:05 +0000 (0:00:01.202) 0:01:42.696 ******** 2026-01-30 03:23:11.981876 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:11.981889 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:11.981902 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:11.981915 | orchestrator | 2026-01-30 03:23:11.981927 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-30 03:23:11.981959 | orchestrator | Friday 30 January 2026 03:23:07 +0000 (0:00:01.768) 0:01:44.465 ******** 2026-01-30 03:23:11.981972 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:11.981985 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:11.981997 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:11.982010 | orchestrator | 2026-01-30 03:23:11.982092 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-30 03:23:11.982104 | orchestrator | Friday 30 January 2026 03:23:07 +0000 (0:00:00.279) 0:01:44.744 ******** 2026-01-30 03:23:11.982115 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:23:11.982126 | orchestrator | 2026-01-30 03:23:11.982137 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-30 03:23:11.982148 | orchestrator | Friday 30 January 2026 03:23:08 +0000 (0:00:00.978) 0:01:45.723 ******** 2026-01-30 03:23:11.982161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 03:23:11.982177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 03:23:11.982217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 03:23:11.982238 | orchestrator | 2026-01-30 03:23:11.982256 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-30 03:23:11.982305 | orchestrator | Friday 30 January 2026 03:23:11 +0000 (0:00:02.767) 0:01:48.490 ******** 2026-01-30 03:23:11.982326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 03:23:11.982345 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:11.982377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 03:23:20.214268 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:20.214379 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 03:23:20.214474 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:20.214495 | orchestrator | 2026-01-30 03:23:20.214508 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-30 03:23:20.214521 | orchestrator | Friday 30 January 2026 03:23:11 +0000 (0:00:00.345) 0:01:48.836 ******** 2026-01-30 03:23:20.214534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-30 03:23:20.214547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-30 03:23:20.214560 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:20.214572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-30 03:23:20.214583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-30 03:23:20.214593 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:20.214605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-30 03:23:20.214616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-30 03:23:20.214797 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:20.214816 | orchestrator | 2026-01-30 03:23:20.214830 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-30 03:23:20.214843 | orchestrator | Friday 30 January 2026 03:23:12 +0000 (0:00:00.765) 0:01:49.601 ******** 2026-01-30 03:23:20.214856 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:20.214869 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:20.214881 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:20.214893 | orchestrator | 2026-01-30 03:23:20.214905 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-30 03:23:20.214918 | orchestrator | Friday 30 January 2026 03:23:13 +0000 (0:00:01.250) 0:01:50.852 ******** 2026-01-30 03:23:20.214931 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:20.214943 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:20.214955 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:20.214968 | orchestrator | 2026-01-30 03:23:20.214981 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-30 03:23:20.215000 | orchestrator | Friday 30 January 2026 03:23:15 +0000 (0:00:01.916) 0:01:52.769 ******** 2026-01-30 03:23:20.215013 | orchestrator 
| skipping: [testbed-node-0] 2026-01-30 03:23:20.215025 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:20.215038 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:20.215049 | orchestrator | 2026-01-30 03:23:20.215062 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-30 03:23:20.215075 | orchestrator | Friday 30 January 2026 03:23:16 +0000 (0:00:00.295) 0:01:53.064 ******** 2026-01-30 03:23:20.215087 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:23:20.215100 | orchestrator | 2026-01-30 03:23:20.215112 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-30 03:23:20.215125 | orchestrator | Friday 30 January 2026 03:23:17 +0000 (0:00:01.004) 0:01:54.068 ******** 2026-01-30 03:23:20.215166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 03:23:20.215198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 03:23:20.215223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 03:23:21.722524 | orchestrator | 2026-01-30 03:23:21.722670 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-30 03:23:21.722689 | orchestrator | Friday 30 January 2026 03:23:20 +0000 (0:00:03.000) 0:01:57.068 ******** 2026-01-30 03:23:21.722726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-01-30 03:23:21.722743 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:21.722780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 03:23:21.722814 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:21.722834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 03:23:21.722847 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:21.722858 | orchestrator | 2026-01-30 03:23:21.722870 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-30 03:23:21.722881 | orchestrator | Friday 30 January 2026 03:23:20 +0000 (0:00:00.609) 0:01:57.677 ******** 2026-01-30 03:23:21.722894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-30 03:23:21.722915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 03:23:21.722929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-30 03:23:21.722950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 03:23:29.780526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-30 03:23:29.780704 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:29.780726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-30 03:23:29.780742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 03:23:29.780775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-30 03:23:29.780791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 03:23:29.780813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-30 03:23:29.780829 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:29.780849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-30 03:23:29.780867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 03:23:29.780887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-30 03:23:29.780936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 03:23:29.780956 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-30 03:23:29.780975 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:29.780994 | orchestrator | 2026-01-30 03:23:29.781014 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-30 03:23:29.781036 | orchestrator | Friday 30 January 2026 03:23:21 +0000 (0:00:00.897) 0:01:58.575 ******** 2026-01-30 03:23:29.781056 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:29.781079 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:29.781101 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:29.781122 | orchestrator | 2026-01-30 03:23:29.781145 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-30 03:23:29.781167 | orchestrator | Friday 30 January 2026 03:23:23 +0000 (0:00:01.508) 0:02:00.084 ******** 2026-01-30 03:23:29.781189 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:29.781203 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:29.781216 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:29.781228 | orchestrator | 2026-01-30 03:23:29.781240 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-30 03:23:29.781253 | orchestrator | Friday 30 January 2026 03:23:25 +0000 (0:00:01.926) 0:02:02.010 ******** 2026-01-30 03:23:29.781266 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:29.781278 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:29.781315 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:29.781328 | orchestrator | 2026-01-30 03:23:29.781340 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-30 03:23:29.781353 | orchestrator | Friday 30 January 2026 03:23:25 +0000 (0:00:00.316) 0:02:02.327 
******** 2026-01-30 03:23:29.781365 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:29.781377 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:29.781393 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:29.781411 | orchestrator | 2026-01-30 03:23:29.781426 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-30 03:23:29.781443 | orchestrator | Friday 30 January 2026 03:23:25 +0000 (0:00:00.293) 0:02:02.621 ******** 2026-01-30 03:23:29.781459 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:23:29.781476 | orchestrator | 2026-01-30 03:23:29.781493 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-30 03:23:29.781510 | orchestrator | Friday 30 January 2026 03:23:26 +0000 (0:00:01.062) 0:02:03.684 ******** 2026-01-30 03:23:29.781546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 
03:23:29.781587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 03:23:29.781703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:23:29.781718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:23:29.781744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:23:30.356881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:23:30.356987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 03:23:30.357027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:23:30.357041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:23:30.357053 | orchestrator | 2026-01-30 03:23:30.357066 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-30 03:23:30.357079 | orchestrator | Friday 30 January 2026 03:23:29 +0000 (0:00:02.947) 0:02:06.631 ******** 2026-01-30 03:23:30.357113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-30 03:23:30.357136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:23:30.357157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:23:30.357186 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:30.357207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-30 
03:23:30.357226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:23:30.357245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:23:30.357264 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:30.357303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-30 03:23:38.751573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:23:38.751734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:23:38.751753 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:38.751766 | orchestrator | 2026-01-30 03:23:38.751779 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-01-30 03:23:38.751792 | orchestrator | Friday 30 January 2026 03:23:30 +0000 (0:00:00.573) 0:02:07.205 ******** 2026-01-30 03:23:38.751804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-30 03:23:38.751818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-30 03:23:38.751831 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:38.751843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-30 03:23:38.751855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-30 03:23:38.751867 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:38.751878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-30 03:23:38.751890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-30 03:23:38.751901 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:38.751912 | orchestrator | 2026-01-30 03:23:38.751923 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-30 03:23:38.751935 | orchestrator | Friday 30 January 2026 03:23:31 +0000 (0:00:00.940) 0:02:08.145 ******** 2026-01-30 03:23:38.751946 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:38.751957 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:38.751991 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:38.752003 | orchestrator | 2026-01-30 03:23:38.752014 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-30 03:23:38.752025 | orchestrator | Friday 30 January 2026 03:23:32 +0000 (0:00:01.213) 0:02:09.359 ******** 2026-01-30 03:23:38.752035 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:38.752046 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:38.752057 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:38.752068 | orchestrator | 2026-01-30 03:23:38.752079 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-30 03:23:38.752093 | orchestrator | Friday 30 January 2026 03:23:34 +0000 (0:00:01.910) 0:02:11.270 ******** 2026-01-30 03:23:38.752105 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:38.752133 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:38.752147 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:38.752160 | orchestrator | 2026-01-30 03:23:38.752173 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-30 03:23:38.752203 | orchestrator | Friday 30 January 2026 03:23:34 +0000 (0:00:00.291) 0:02:11.561 ******** 2026-01-30 03:23:38.752217 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:23:38.752229 | orchestrator | 2026-01-30 03:23:38.752241 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-30 03:23:38.752254 | orchestrator | Friday 30 January 2026 03:23:35 +0000 (0:00:01.082) 0:02:12.643 ******** 2026-01-30 03:23:38.752269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-30 03:23:38.752286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:23:38.752301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-30 03:23:38.752323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:23:38.752345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-30 03:23:43.620025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:23:43.620132 | orchestrator | 2026-01-30 03:23:43.620150 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-30 03:23:43.620164 | orchestrator | Friday 30 January 2026 03:23:38 +0000 (0:00:02.958) 0:02:15.602 ******** 2026-01-30 03:23:43.620177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-30 03:23:43.620270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:23:43.620311 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:43.620332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-30 03:23:43.620365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:23:43.620377 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:43.620389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-30 03:23:43.620401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:23:43.620420 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:43.620431 | orchestrator | 2026-01-30 03:23:43.620442 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-30 03:23:43.620454 | orchestrator | Friday 30 January 2026 03:23:39 +0000 (0:00:00.624) 0:02:16.226 ******** 2026-01-30 03:23:43.620466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-30 03:23:43.620479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-30 03:23:43.620492 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:43.620503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-30 03:23:43.620514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-30 03:23:43.620526 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:43.620536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-30 03:23:43.620547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-30 03:23:43.620559 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:43.620569 | orchestrator | 2026-01-30 03:23:43.620656 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-30 03:23:43.620680 | orchestrator | Friday 30 January 2026 03:23:40 +0000 (0:00:00.827) 0:02:17.053 ******** 2026-01-30 03:23:43.620697 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:43.620715 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:43.620732 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:43.620749 | orchestrator | 2026-01-30 03:23:43.620766 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-30 03:23:43.620783 | orchestrator | Friday 30 January 2026 03:23:41 +0000 (0:00:01.521) 0:02:18.575 ******** 
2026-01-30 03:23:43.620801 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:43.620819 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:43.620838 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:43.620856 | orchestrator | 2026-01-30 03:23:43.620874 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-30 03:23:43.620900 | orchestrator | Friday 30 January 2026 03:23:43 +0000 (0:00:01.894) 0:02:20.469 ******** 2026-01-30 03:23:47.715292 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:23:47.715398 | orchestrator | 2026-01-30 03:23:47.715413 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-30 03:23:47.715425 | orchestrator | Friday 30 January 2026 03:23:44 +0000 (0:00:00.975) 0:02:21.445 ******** 2026-01-30 03:23:47.715437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 03:23:47.715477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}}}}) 2026-01-30 03:23:47.715527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 
5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 03:23:47.715659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 03:23:47.715695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616052 | orchestrator | 2026-01-30 03:23:48.616156 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-30 03:23:48.616175 | orchestrator | Friday 30 January 2026 03:23:47 +0000 (0:00:03.207) 0:02:24.653 ******** 2026-01-30 03:23:48.616215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-30 03:23:48.616232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616273 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:48.616302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-30 03:23:48.616333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616376 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:48.616387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-30 03:23:48.616399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 03:23:48.616435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 03:23:58.960700 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:58.960823 | orchestrator | 2026-01-30 03:23:58.960841 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-30 03:23:58.960855 | orchestrator | Friday 30 January 2026 03:23:48 +0000 (0:00:00.899) 0:02:25.552 ******** 2026-01-30 03:23:58.960867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-30 03:23:58.960880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-30 03:23:58.960893 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:58.960906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-30 03:23:58.960918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-30 03:23:58.960929 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:23:58.960941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-30 03:23:58.960952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-30 03:23:58.960963 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:23:58.960974 | orchestrator | 2026-01-30 03:23:58.960985 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-30 03:23:58.960996 | orchestrator | Friday 30 January 2026 03:23:49 +0000 (0:00:00.846) 0:02:26.399 ******** 2026-01-30 03:23:58.961008 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:58.961019 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:58.961030 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:58.961041 | orchestrator | 2026-01-30 03:23:58.961052 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-30 03:23:58.961063 | orchestrator | Friday 30 January 2026 03:23:50 +0000 (0:00:01.253) 0:02:27.652 ******** 2026-01-30 03:23:58.961074 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:23:58.961085 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:23:58.961096 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:23:58.961107 | orchestrator | 2026-01-30 03:23:58.961118 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-30 03:23:58.961129 | orchestrator | Friday 30 January 2026 03:23:52 +0000 (0:00:01.904) 0:02:29.556 
******** 2026-01-30 03:23:58.961140 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:23:58.961153 | orchestrator | 2026-01-30 03:23:58.961165 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-30 03:23:58.961177 | orchestrator | Friday 30 January 2026 03:23:53 +0000 (0:00:01.198) 0:02:30.755 ******** 2026-01-30 03:23:58.961190 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 03:23:58.961202 | orchestrator | 2026-01-30 03:23:58.961215 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-30 03:23:58.961250 | orchestrator | Friday 30 January 2026 03:23:56 +0000 (0:00:02.964) 0:02:33.720 ******** 2026-01-30 03:23:58.961304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:23:58.961324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 03:23:58.961338 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:23:58.961359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:23:58.961382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 03:23:58.961395 | orchestrator | skipping: 
[testbed-node-1]
2026-01-30 03:23:58.961418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-30 03:24:01.144382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-30 03:24:01.144483 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:01.144502 | orchestrator |
2026-01-30 03:24:01.144515 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-01-30 03:24:01.144528 | orchestrator | Friday 30 January 2026 03:23:58 +0000 (0:00:02.092) 0:02:35.812 ********
2026-01-30 03:24:01.144628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-30 03:24:01.144662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-30 03:24:01.144678 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:01.144714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-30 03:24:01.144760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-30 03:24:01.144782 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:01.144801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-30 03:24:01.144831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-01-30 03:24:10.303598 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:10.303743 | orchestrator |
2026-01-30 03:24:10.303770 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-01-30 03:24:10.303792 | orchestrator | Friday 30 January 2026 03:24:01 +0000 (0:00:02.186) 0:02:37.998 ********
2026-01-30 03:24:10.303815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-30 03:24:10.303869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-30 03:24:10.303910 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:10.303933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-30 03:24:10.303953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-30 03:24:10.303973 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:10.303993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-30 03:24:10.304015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-01-30 03:24:10.304036 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:10.304057 | orchestrator |
2026-01-30 03:24:10.304077 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-01-30 03:24:10.304096 | orchestrator | Friday 30 January 2026 03:24:03 +0000 (0:00:02.660) 0:02:40.659 ********
2026-01-30 03:24:10.304115 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:24:10.304176 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:24:10.304192 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:24:10.304204 | orchestrator |
2026-01-30 03:24:10.304217 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-01-30 03:24:10.304230 | orchestrator | Friday 30 January 2026 03:24:05 +0000 (0:00:01.980) 0:02:42.640 ********
2026-01-30 03:24:10.304242 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:10.304254 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:10.304267 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:10.304279 | orchestrator |
2026-01-30 03:24:10.304293 | orchestrator | TASK [include_role : masakari] *************************************************
2026-01-30 03:24:10.304306 | orchestrator | Friday 30 January 2026 03:24:07 +0000 (0:00:01.338) 0:02:43.979 ********
2026-01-30 03:24:10.304318 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:10.304330 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:10.304343 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:10.304355 | orchestrator |
2026-01-30 03:24:10.304368 | orchestrator | TASK [include_role : memcached] ************************************************
2026-01-30 03:24:10.304380 | orchestrator | Friday 30 January 2026 03:24:07 +0000 (0:00:00.289) 0:02:44.268 ********
2026-01-30 03:24:10.304394 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:24:10.304407 | orchestrator |
2026-01-30 03:24:10.304418 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-01-30 03:24:10.304429 | orchestrator | Friday 30 January 2026 03:24:08 +0000 (0:00:01.265) 0:02:45.534 ********
2026-01-30 03:24:10.304449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-30 03:24:10.304465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-30 03:24:10.304477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-30 03:24:10.304488 | orchestrator |
2026-01-30 03:24:10.304500 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-01-30 03:24:10.304519 | orchestrator | Friday 30 January 2026 03:24:10 +0000 (0:00:01.423) 0:02:46.957 ********
2026-01-30 03:24:10.304584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-30 03:24:17.926170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-30 03:24:17.926302 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:17.926331 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:17.926354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-01-30 03:24:17.926374 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:17.926394 | orchestrator |
2026-01-30 03:24:17.926417 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-01-30 03:24:17.926439 | orchestrator | Friday 30 January 2026 03:24:10 +0000 (0:00:00.398) 0:02:47.355 ********
2026-01-30 03:24:17.926461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-30 03:24:17.926484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-30 03:24:17.926504 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:17.926576 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:17.926589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-01-30 03:24:17.926627 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:17.926642 | orchestrator |
2026-01-30 03:24:17.926696 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-01-30 03:24:17.926710 | orchestrator | Friday 30 January 2026 03:24:11 +0000 (0:00:00.775) 0:02:48.131 ********
2026-01-30 03:24:17.926722 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:17.926736 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:17.926748 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:17.926761 | orchestrator |
2026-01-30 03:24:17.926773 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-01-30 03:24:17.926785 | orchestrator | Friday 30 January 2026 03:24:11 +0000 (0:00:00.432) 0:02:48.564 ********
2026-01-30 03:24:17.926798 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:17.926810 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:17.926822 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:17.926834 | orchestrator |
2026-01-30 03:24:17.926847 | orchestrator | TASK [include_role : mistral] **************************************************
2026-01-30 03:24:17.926859 | orchestrator | Friday 30 January 2026 03:24:12 +0000 (0:00:01.136) 0:02:49.700 ********
2026-01-30 03:24:17.926871 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:24:17.926884 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:24:17.926896 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:24:17.926907 | orchestrator |
2026-01-30 03:24:17.926920 | orchestrator | TASK [include_role : neutron] **************************************************
2026-01-30 03:24:17.926933 | orchestrator | Friday 30 January 2026 03:24:13 +0000 (0:00:00.296) 0:02:49.996 ********
2026-01-30 03:24:17.926945 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:24:17.926956 | orchestrator |
2026-01-30 03:24:17.926967 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-01-30 03:24:17.926978 | orchestrator | Friday 30 January 2026 03:24:14 +0000 (0:00:01.325) 0:02:51.321 ********
2026-01-30 03:24:17.927013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 03:24:17.927032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:17.927045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:17.927068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:17.927080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-30 03:24:17.927101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:18.062729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-30 03:24:18.062801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-30 03:24:18.062810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:18.062828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 03:24:18.062835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:18.062841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-30 03:24:18.062857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-30 03:24:18.062862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:18.062871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 03:24:18.062883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-30 03:24:18.062890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-30 03:24:18.062895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-30 03:24:18.062905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False,
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.167884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.168009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-30 03:24:18.168025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 03:24:18.168037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.168049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.168085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:18.168108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.168118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:18.168129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.168140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.168150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 
'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-30 03:24:18.168172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:18.294721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.294853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.294873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:18.294888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-30 03:24:18.294901 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:18.294913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:18.294962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.294995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.295008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:18.295021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 
'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-30 03:24:18.295035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:18.295052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:18.295078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-30 03:24:19.376180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.376308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.376338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-30 03:24:19.376365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:19.376388 | orchestrator | 2026-01-30 03:24:19.376410 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-30 03:24:19.376466 | orchestrator | Friday 30 January 2026 03:24:18 +0000 (0:00:03.915) 0:02:55.237 ******** 2026-01-30 03:24:19.376564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 03:24:19.376620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.376642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.376663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.376684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-30 03:24:19.376733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.376758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.376793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.462755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.462906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:19.462936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 03:24:19.462993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.463034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.463081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-30 03:24:19.463104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.463124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.463144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.463176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.463205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-30 03:24:19.463240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-30 03:24:19.535896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.535996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 03:24:19.536035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:19.536050 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:24:19.536078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.536094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.536108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.536140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.536154 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.536181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.536194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:19.536206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-30 03:24:19.536227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.746737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.746871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-30 03:24:19.746890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.746923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.746952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:19.746973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.746993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.747059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-30 03:24:19.747100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 
 2026-01-30 03:24:19.747131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:19.747153 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:24:19.747175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:19.747197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-30 03:24:19.747221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-30 03:24:29.560836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-30 03:24:29.560974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-30 03:24:29.561007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-30 03:24:29.561020 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:24:29.561033 | orchestrator | 2026-01-30 03:24:29.561045 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-01-30 03:24:29.561057 | orchestrator | Friday 30 January 2026 03:24:19 +0000 (0:00:01.359) 0:02:56.596 ******** 2026-01-30 03:24:29.561068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-30 03:24:29.561080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-30 03:24:29.561092 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:24:29.561102 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-30 03:24:29.561112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-30 03:24:29.561122 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:24:29.561133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-01-30 03:24:29.561143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-01-30 03:24:29.561162 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:24:29.561172 | orchestrator | 2026-01-30 03:24:29.561182 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-30 03:24:29.561192 | orchestrator | Friday 30 January 2026 03:24:21 +0000 (0:00:01.826) 0:02:58.422 ******** 2026-01-30 03:24:29.561203 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:24:29.561213 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:24:29.561240 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:24:29.561252 | orchestrator | 2026-01-30 03:24:29.561262 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-30 03:24:29.561272 | orchestrator | Friday 30 January 2026 03:24:22 +0000 (0:00:01.292) 0:02:59.714 ******** 2026-01-30 03:24:29.561282 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:24:29.561292 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:24:29.561302 | orchestrator | changed: [testbed-node-2] 
2026-01-30 03:24:29.561313 | orchestrator | 2026-01-30 03:24:29.561323 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-30 03:24:29.561333 | orchestrator | Friday 30 January 2026 03:24:24 +0000 (0:00:01.909) 0:03:01.624 ******** 2026-01-30 03:24:29.561343 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:24:29.561355 | orchestrator | 2026-01-30 03:24:29.561367 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-30 03:24:29.561378 | orchestrator | Friday 30 January 2026 03:24:25 +0000 (0:00:01.125) 0:03:02.750 ******** 2026-01-30 03:24:29.561392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 03:24:29.561411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 03:24:29.561424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 03:24:29.561443 | orchestrator | 2026-01-30 03:24:29.561455 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-30 03:24:29.561468 | orchestrator | Friday 30 January 2026 03:24:29 +0000 (0:00:03.190) 0:03:05.940 ******** 2026-01-30 03:24:29.561509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 03:24:39.012609 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:24:39.012723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 03:24:39.012743 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:24:39.012774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 03:24:39.012787 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:24:39.012799 | orchestrator | 2026-01-30 03:24:39.012811 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-30 03:24:39.012824 | orchestrator | Friday 30 January 2026 03:24:29 +0000 (0:00:00.474) 0:03:06.415 ******** 2026-01-30 03:24:39.012836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-30 03:24:39.012872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-30 03:24:39.012886 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:24:39.012897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-01-30 03:24:39.012908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-30 03:24:39.012919 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:24:39.012930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-30 03:24:39.012942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-01-30 03:24:39.012953 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:24:39.012964 | orchestrator | 2026-01-30 03:24:39.012976 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-30 03:24:39.012986 | orchestrator | Friday 30 January 2026 03:24:30 +0000 (0:00:00.709) 0:03:07.125 ******** 2026-01-30 03:24:39.012998 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:24:39.013008 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:24:39.013019 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:24:39.013030 | orchestrator | 2026-01-30 03:24:39.013041 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-30 03:24:39.013052 | orchestrator | Friday 30 January 2026 03:24:32 +0000 (0:00:01.755) 0:03:08.881 ******** 2026-01-30 03:24:39.013064 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:24:39.013075 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:24:39.013104 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:24:39.013118 | orchestrator | 
2026-01-30 03:24:39.013131 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-30 03:24:39.013145 | orchestrator | Friday 30 January 2026 03:24:33 +0000 (0:00:01.746) 0:03:10.627 ******** 2026-01-30 03:24:39.013159 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:24:39.013171 | orchestrator | 2026-01-30 03:24:39.013182 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-30 03:24:39.013193 | orchestrator | Friday 30 January 2026 03:24:35 +0000 (0:00:01.443) 0:03:12.071 ******** 2026-01-30 03:24:39.013208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 03:24:39.013238 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.013252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.013272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 03:24:39.886298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 03:24:39.886570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.887302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.887331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.887343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.887355 | orchestrator | 2026-01-30 03:24:39.887369 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-30 03:24:39.887382 | orchestrator | Friday 30 January 2026 03:24:39 +0000 (0:00:03.794) 0:03:15.866 ******** 2026-01-30 03:24:39.887421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-01-30 03:24:39.887448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.887508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:24:39.887521 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:24:39.887535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-30 03:24:39.887556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:24:50.323692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:24:50.323807 | 
orchestrator | skipping: [testbed-node-1] 2026-01-30 03:24:50.323843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-30 03:24:50.323881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 03:24:50.323893 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 03:24:50.323904 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:24:50.323915 | orchestrator | 2026-01-30 03:24:50.323926 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-30 03:24:50.323939 | orchestrator | Friday 30 January 2026 03:24:39 +0000 (0:00:00.873) 0:03:16.739 ******** 2026-01-30 03:24:50.323951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-30 03:24:50.323965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-30 03:24:50.323977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-30 
03:24:50.324020 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:24:50.324031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324120 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:24:50.324138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-30 03:24:50.324204 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:24:50.324216 | orchestrator | 2026-01-30 03:24:50.324227 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-30 03:24:50.324239 | orchestrator | Friday 30 January 2026 03:24:41 +0000 (0:00:01.156) 0:03:17.896 ******** 2026-01-30 03:24:50.324251 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:24:50.324262 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:24:50.324273 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:24:50.324284 | orchestrator | 2026-01-30 03:24:50.324295 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-30 03:24:50.324307 | orchestrator | Friday 30 January 2026 03:24:42 +0000 (0:00:01.429) 0:03:19.325 ******** 2026-01-30 03:24:50.324318 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:24:50.324329 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:24:50.324340 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:24:50.324350 | orchestrator | 2026-01-30 03:24:50.324360 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-30 03:24:50.324370 | orchestrator | Friday 30 January 2026 03:24:44 +0000 (0:00:01.978) 0:03:21.304 ******** 2026-01-30 03:24:50.324379 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:24:50.324389 | orchestrator | 2026-01-30 03:24:50.324399 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-30 03:24:50.324408 | orchestrator | Friday 30 January 2026 03:24:45 +0000 (0:00:01.465) 0:03:22.770 ******** 2026-01-30 03:24:50.324418 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-30 03:24:50.324430 | orchestrator | 2026-01-30 03:24:50.324468 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-30 03:24:50.324480 | orchestrator | Friday 30 January 2026 03:24:46 +0000 (0:00:00.773) 0:03:23.544 ******** 2026-01-30 03:24:50.324492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-30 03:24:50.324522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-30 03:25:01.599214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-30 03:25:01.599363 | orchestrator | 2026-01-30 03:25:01.599393 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-30 03:25:01.599478 | orchestrator | Friday 30 January 2026 03:24:50 +0000 (0:00:03.631) 0:03:27.175 ******** 2026-01-30 03:25:01.599505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:01.599528 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:01.599573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:01.599595 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:01.599616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:01.599637 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:01.599656 | orchestrator | 2026-01-30 03:25:01.599676 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-30 03:25:01.599702 | orchestrator | Friday 30 January 2026 03:24:51 +0000 (0:00:01.264) 0:03:28.440 ******** 2026-01-30 03:25:01.599726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 03:25:01.599751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 03:25:01.599806 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:01.599827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 03:25:01.599848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 03:25:01.599868 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:01.599888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2026-01-30 03:25:01.599908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 03:25:01.599956 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:01.599977 | orchestrator | 2026-01-30 03:25:01.599997 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-30 03:25:01.600018 | orchestrator | Friday 30 January 2026 03:24:52 +0000 (0:00:01.339) 0:03:29.779 ******** 2026-01-30 03:25:01.600038 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:25:01.600058 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:25:01.600078 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:25:01.600098 | orchestrator | 2026-01-30 03:25:01.600116 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-30 03:25:01.600135 | orchestrator | Friday 30 January 2026 03:24:55 +0000 (0:00:02.316) 0:03:32.095 ******** 2026-01-30 03:25:01.600153 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:25:01.600171 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:25:01.600188 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:25:01.600206 | orchestrator | 2026-01-30 03:25:01.600224 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-30 03:25:01.600242 | orchestrator | Friday 30 January 2026 03:24:58 +0000 (0:00:03.100) 0:03:35.195 ******** 2026-01-30 03:25:01.600261 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-30 03:25:01.600280 | orchestrator | 2026-01-30 03:25:01.600298 | orchestrator | TASK [haproxy-config : Copying over 
nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-30 03:25:01.600317 | orchestrator | Friday 30 January 2026 03:24:59 +0000 (0:00:01.057) 0:03:36.253 ******** 2026-01-30 03:25:01.600349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:01.600369 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:01.600452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:01.600496 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:01.600552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}}}})  2026-01-30 03:25:01.600573 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:01.600590 | orchestrator | 2026-01-30 03:25:01.600609 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-30 03:25:01.600628 | orchestrator | Friday 30 January 2026 03:25:00 +0000 (0:00:01.003) 0:03:37.257 ******** 2026-01-30 03:25:01.600648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:01.600666 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:01.600685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:01.600715 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:22.879721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 03:25:22.879839 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:22.879858 | orchestrator | 2026-01-30 03:25:22.879872 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-30 03:25:22.879885 | orchestrator | Friday 30 January 2026 03:25:01 +0000 (0:00:01.191) 0:03:38.448 ******** 2026-01-30 03:25:22.879898 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:22.879910 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:22.879921 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:22.879932 | orchestrator | 2026-01-30 03:25:22.879945 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-30 03:25:22.879964 | orchestrator | Friday 30 January 2026 03:25:02 +0000 (0:00:01.382) 0:03:39.830 ******** 2026-01-30 03:25:22.879983 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:25:22.880003 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:25:22.880021 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:25:22.880040 | orchestrator | 2026-01-30 03:25:22.880058 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-30 03:25:22.880078 | orchestrator | Friday 30 January 2026 03:25:05 +0000 (0:00:02.491) 0:03:42.321 ******** 2026-01-30 03:25:22.880122 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:25:22.880134 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:25:22.880145 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:25:22.880155 | orchestrator | 2026-01-30 03:25:22.880181 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-30 03:25:22.880193 | orchestrator | Friday 30 
January 2026 03:25:07 +0000 (0:00:02.519) 0:03:44.841 ******** 2026-01-30 03:25:22.880205 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-30 03:25:22.880218 | orchestrator | 2026-01-30 03:25:22.880229 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-30 03:25:22.880240 | orchestrator | Friday 30 January 2026 03:25:09 +0000 (0:00:01.080) 0:03:45.922 ******** 2026-01-30 03:25:22.880255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 03:25:22.880269 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:22.880282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 03:25:22.880295 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:22.880307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 03:25:22.880320 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:22.880332 | orchestrator | 2026-01-30 03:25:22.880345 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-30 03:25:22.880358 | orchestrator | Friday 30 January 2026 03:25:10 +0000 (0:00:01.185) 0:03:47.108 ******** 2026-01-30 03:25:22.880469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 03:25:22.880486 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:22.880500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 
03:25:22.880524 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:22.880537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 03:25:22.880550 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:22.880563 | orchestrator | 2026-01-30 03:25:22.880582 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-30 03:25:22.880594 | orchestrator | Friday 30 January 2026 03:25:11 +0000 (0:00:01.193) 0:03:48.301 ******** 2026-01-30 03:25:22.880608 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:22.880620 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:22.880633 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:22.880644 | orchestrator | 2026-01-30 03:25:22.880655 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-30 03:25:22.880671 | orchestrator | Friday 30 January 2026 03:25:13 +0000 (0:00:01.652) 0:03:49.954 ******** 2026-01-30 03:25:22.880689 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:25:22.880705 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:25:22.880723 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:25:22.880741 | orchestrator | 2026-01-30 03:25:22.880759 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-30 03:25:22.880778 | orchestrator | Friday 30 January 2026 03:25:15 +0000 (0:00:02.209) 0:03:52.164 ******** 2026-01-30 03:25:22.880797 | 
orchestrator | ok: [testbed-node-0] 2026-01-30 03:25:22.880816 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:25:22.880831 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:25:22.880841 | orchestrator | 2026-01-30 03:25:22.880852 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-30 03:25:22.880863 | orchestrator | Friday 30 January 2026 03:25:18 +0000 (0:00:02.933) 0:03:55.097 ******** 2026-01-30 03:25:22.880874 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:25:22.880885 | orchestrator | 2026-01-30 03:25:22.880896 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-30 03:25:22.880907 | orchestrator | Friday 30 January 2026 03:25:19 +0000 (0:00:01.255) 0:03:56.352 ******** 2026-01-30 03:25:22.880920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 03:25:22.880933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 03:25:22.880965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.564168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.564296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:25:23.564316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 03:25:23.564331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 03:25:23.564365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 03:25:23.564422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.564436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 03:25:23.564448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.564460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:25:23.564471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.564516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.564538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:25:23.564551 | orchestrator | 2026-01-30 03:25:23.564565 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-30 03:25:23.564578 | orchestrator | Friday 30 January 2026 03:25:22 +0000 (0:00:03.502) 0:03:59.854 ******** 2026-01-30 03:25:23.564600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 03:25:23.695683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 03:25:23.695795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.695815 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.695829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:25:23.695863 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:23.695874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 03:25:23.695883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 03:25:23.695914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.695922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 03:25:23.695929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 03:25:23.695942 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:23.695949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 03:25:23.695956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-30 03:25:23.695963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-30 03:25:23.695980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-30 03:25:34.642102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-30 03:25:34.642205 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:34.642218 | orchestrator |
2026-01-30 03:25:34.642228 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-01-30 03:25:34.642238 | orchestrator | Friday 30 January 2026 03:25:23 +0000 (0:00:00.697) 0:04:00.552 ********
2026-01-30 03:25:34.642247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-30 03:25:34.642280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-30 03:25:34.642290 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:34.642298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-30 03:25:34.642306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-30 03:25:34.642314 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:34.642322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-30 03:25:34.642329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-01-30 03:25:34.642337 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:34.642345 | orchestrator |
2026-01-30 03:25:34.642389 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-01-30 03:25:34.642398 | orchestrator | Friday 30 January 2026 03:25:24 +0000 (0:00:00.864) 0:04:01.417 ********
2026-01-30 03:25:34.642405 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:25:34.642413 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:25:34.642421 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:25:34.642428 | orchestrator |
2026-01-30 03:25:34.642435 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-01-30 03:25:34.642443 | orchestrator | Friday 30 January 2026 03:25:26 +0000 (0:00:01.693) 0:04:03.110 ********
2026-01-30 03:25:34.642449 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:25:34.642456 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:25:34.642464 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:25:34.642471 | orchestrator |
2026-01-30 03:25:34.642478 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-01-30 03:25:34.642485 | orchestrator | Friday 30 January 2026 03:25:28 +0000 (0:00:02.007) 0:04:05.118 ********
2026-01-30 03:25:34.642491 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:25:34.642499 | orchestrator |
2026-01-30 03:25:34.642506 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-01-30 03:25:34.642514 | orchestrator | Friday 30 January 2026 03:25:29 +0000 (0:00:01.326) 0:04:06.444 ********
2026-01-30 03:25:34.642538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-30 03:25:34.642568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-30 03:25:34.642587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-30 03:25:34.642596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-30 03:25:34.642610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-30 03:25:34.642627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-30 03:25:36.481039 | orchestrator |
2026-01-30 03:25:36.481139 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-01-30 03:25:36.481159 | orchestrator | Friday 30 January 2026 03:25:34 +0000 (0:00:05.048) 0:04:11.492 ********
2026-01-30 03:25:36.481182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-30 03:25:36.481208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-30 03:25:36.481231 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:36.481264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-30 03:25:36.481277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-30 03:25:36.481331 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:36.481344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-30 03:25:36.481447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-30 03:25:36.481468 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:36.481480 | orchestrator |
2026-01-30 03:25:36.481492 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-01-30 03:25:36.481504 | orchestrator | Friday 30 January 2026 03:25:35 +0000 (0:00:00.988) 0:04:12.481 ********
2026-01-30 03:25:36.481516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-01-30 03:25:36.481531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-01-30 03:25:36.481547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-01-30 03:25:36.481570 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:36.481590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-01-30 03:25:36.481603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-01-30 03:25:36.481617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-01-30 03:25:36.481629 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:36.481641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-01-30 03:25:36.481653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-01-30 03:25:36.481682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-01-30 03:25:42.336553 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:42.336629 | orchestrator |
2026-01-30 03:25:42.336636 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-01-30 03:25:42.336642 | orchestrator | Friday 30 January 2026 03:25:36 +0000 (0:00:00.428) 0:04:13.329 ********
2026-01-30 03:25:42.336647 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:42.336652 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:42.336656 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:42.336660 | orchestrator |
2026-01-30 03:25:42.336665 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-01-30 03:25:42.336669 | orchestrator | Friday 30 January 2026 03:25:36 +0000 (0:00:00.428) 0:04:13.758 ********
2026-01-30 03:25:42.336673 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:42.336677 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:42.336681 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:42.336685 | orchestrator |
2026-01-30 03:25:42.336689 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-01-30 03:25:42.336693 | orchestrator | Friday 30 January 2026 03:25:38 +0000 (0:00:01.527) 0:04:15.286 ********
2026-01-30 03:25:42.336697 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:25:42.336702 | orchestrator |
2026-01-30 03:25:42.336706 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-01-30 03:25:42.336710 | orchestrator | Friday 30 January 2026 03:25:39 +0000 (0:00:01.565) 0:04:16.852 ********
2026-01-30 03:25:42.336716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-30 03:25:42.336740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 03:25:42.336755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:42.336760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:42.336766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 03:25:42.336782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-30 03:25:42.336787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 03:25:42.336791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:42.336799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:42.336804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 03:25:42.336811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-30 03:25:42.336815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 03:25:42.336823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:43.910245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:43.910333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 03:25:43.910414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-30 03:25:43.910441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-30 03:25:43.910450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:43.910476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-30 03:25:43.910485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:43.910500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-30 03:25:43.910509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-01-30 03:25:43.910521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:43.910530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:25:43.910539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-30 03:25:43.910555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name':
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-30 03:25:44.579321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-30 03:25:44.579472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.579521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.579561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 03:25:44.579580 | orchestrator | 2026-01-30 03:25:44.579603 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-30 03:25:44.579623 | orchestrator | Friday 30 January 2026 03:25:44 +0000 (0:00:04.055) 0:04:20.907 ******** 2026-01-30 03:25:44.579676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-30 03:25:44.579696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 03:25:44.579789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.579816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.579833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 03:25:44.579856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-30 03:25:44.579871 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-30 03:25:44.579891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-30 03:25:44.732146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 03:25:44.732260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.732286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.732294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.732301 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.732308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 03:25:44.732315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 03:25:44.732396 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:44.732422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-30 03:25:44.732431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-30 03:25:44.732442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.732448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:44.732455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-30 03:25:44.732467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 03:25:44.732473 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:25:44.732489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 03:25:46.522953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:46.523065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:46.523105 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 03:25:46.523124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-30 03:25:46.523139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-30 03:25:46.523178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:46.523214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 03:25:46.523227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 03:25:46.523240 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:25:46.523255 | orchestrator | 2026-01-30 03:25:46.523269 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-30 03:25:46.523283 | orchestrator | Friday 30 January 2026 03:25:44 +0000 (0:00:00.831) 0:04:21.739 ******** 2026-01-30 03:25:46.523304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-30 03:25:46.523319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-30 03:25:46.523390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-30 03:25:46.523408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-30 03:25:46.523422 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:25:46.523435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})
2026-01-30 03:25:46.523458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-30 03:25:46.523472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-30 03:25:46.523485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-30 03:25:46.523497 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:46.523511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-01-30 03:25:46.523525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-01-30 03:25:46.523539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-30 03:25:46.523563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-01-30 03:25:53.336791 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:53.336888 | orchestrator |
2026-01-30 03:25:53.336899 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-01-30 03:25:53.336910 | orchestrator | Friday 30 January 2026 03:25:46 +0000 (0:00:01.627) 0:04:23.366 ********
2026-01-30 03:25:53.336922 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:53.336934 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:53.336944 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:53.336956 | orchestrator |
2026-01-30 03:25:53.336968 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-01-30 03:25:53.336979 | orchestrator | Friday 30 January 2026 03:25:46 +0000 (0:00:00.412) 0:04:23.779 ********
2026-01-30 03:25:53.336991 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:53.337003 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:53.337014 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:53.337027 | orchestrator |
2026-01-30 03:25:53.337040 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-01-30 03:25:53.337052 | orchestrator | Friday 30 January 2026 03:25:48 +0000 (0:00:01.235) 0:04:25.014 ********
2026-01-30 03:25:53.337064 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:25:53.337071 | orchestrator |
2026-01-30 03:25:53.337078 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-01-30 03:25:53.337090 | orchestrator | Friday 30 January 2026 03:25:49 +0000 (0:00:01.649) 0:04:26.664 ********
2026-01-30 03:25:53.337105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:25:53.337152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:25:53.337209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:25:53.337223 | orchestrator |
2026-01-30 03:25:53.337235 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-01-30 03:25:53.337267 | orchestrator | Friday 30 January 2026 03:25:51 +0000 (0:00:02.016) 0:04:28.681 ********
2026-01-30 03:25:53.337280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:25:53.337306 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:53.337341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:25:53.337355 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:53.337368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:25:53.337379 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:53.337390 | orchestrator |
2026-01-30 03:25:53.337402 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-01-30 03:25:53.337413 | orchestrator | Friday 30 January 2026 03:25:52 +0000 (0:00:00.335) 0:04:29.016 ********
2026-01-30 03:25:53.337427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-30 03:25:53.337440 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:25:53.337452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-30 03:25:53.337463 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:25:53.337473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-01-30 03:25:53.337483 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:25:53.337493 | orchestrator |
2026-01-30 03:25:53.337504 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-01-30 03:25:53.337514 | orchestrator | Friday 30 January 2026 03:25:52 +0000 (0:00:00.538) 0:04:29.554 ********
2026-01-30 03:25:53.337535 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:02.124968 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:02.125047 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:02.125054 | orchestrator |
2026-01-30 03:26:02.125060 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-01-30 03:26:02.125065 | orchestrator | Friday 30 January 2026 03:25:53 +0000 (0:00:00.638) 0:04:30.193 ********
2026-01-30 03:26:02.125070 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:02.125089 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:02.125094 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:02.125097 | orchestrator |
2026-01-30 03:26:02.125101 | orchestrator | TASK [include_role : skyline] **************************************************
2026-01-30 03:26:02.125106 | orchestrator | Friday 30 January 2026 03:25:54 +0000 (0:00:01.076) 0:04:31.269 ********
2026-01-30 03:26:02.125110 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:26:02.125114 | orchestrator |
2026-01-30 03:26:02.125118 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-01-30 03:26:02.125122 | orchestrator | Friday 30 January 2026 03:25:55 +0000 (0:00:01.358) 0:04:32.627 ********
2026-01-30 03:26:02.125141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-30 03:26:02.125148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-30 03:26:02.125152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-30 03:26:02.125167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-30 03:26:02.125181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-30 03:26:02.125186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-30 03:26:02.125190 | orchestrator |
2026-01-30 03:26:02.125194 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-01-30 03:26:02.125199 | orchestrator | Friday 30 January 2026 03:26:01 +0000 (0:00:05.379) 0:04:38.007 ********
2026-01-30 03:26:02.125204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-30 03:26:02.125212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-30 03:26:07.173135 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:07.173288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-30 03:26:07.173399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-30 03:26:07.173422 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:07.173442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-01-30 03:26:07.173462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-01-30 03:26:07.173501 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:07.173513 | orchestrator |
2026-01-30 03:26:07.173526 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-01-30 03:26:07.173539 | orchestrator | Friday 30 January 2026 03:26:02 +0000 (0:00:00.971) 0:04:38.978 ********
2026-01-30 03:26:07.173577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173680 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:07.173698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173772 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:07.173791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-01-30 03:26:07.173871 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:07.173886 | orchestrator |
2026-01-30 03:26:07.173916 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-01-30 03:26:07.173935 | orchestrator | Friday 30 January 2026 03:26:02 +0000 (0:00:00.862) 0:04:39.841 ********
2026-01-30 03:26:07.173954 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:26:07.173973 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:26:07.173990 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:26:07.174009 | orchestrator |
2026-01-30 03:26:07.174167 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-01-30 03:26:07.174181 | orchestrator | Friday 30 January 2026 03:26:04 +0000 (0:00:01.309) 0:04:41.150 ********
2026-01-30 03:26:07.174192 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:26:07.174203 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:26:07.174214 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:26:07.174224 | orchestrator |
2026-01-30 03:26:07.174236 | orchestrator | TASK [include_role : swift] ****************************************************
2026-01-30 03:26:07.174247 | orchestrator | Friday 30 January 2026 03:26:06 +0000 (0:00:01.912) 0:04:43.062 ********
2026-01-30 03:26:07.174258 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:07.174269 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:07.174280 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:07.174291 | orchestrator |
2026-01-30 03:26:07.174337 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-01-30 03:26:07.174349 | orchestrator | Friday 30 January 2026 03:26:06 +0000 (0:00:00.448) 0:04:43.511 ********
2026-01-30 03:26:07.174360 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:07.174370 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:07.174381 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:07.174392 | orchestrator |
2026-01-30 03:26:07.174403 | orchestrator | TASK [include_role : trove] ****************************************************
2026-01-30 03:26:07.174414 | orchestrator | Friday 30 January 2026 03:26:06 +0000 (0:00:00.254) 0:04:43.766 ********
2026-01-30 03:26:07.174425 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:07.174451 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.723287 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.723410 | orchestrator |
2026-01-30 03:26:51.723427 | orchestrator | TASK [include_role : venus] ****************************************************
2026-01-30 03:26:51.723441 | orchestrator | Friday 30 January 2026 03:26:07 +0000 (0:00:00.266) 0:04:44.032 ********
2026-01-30 03:26:51.723452 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.723464 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.723475 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.723486 | orchestrator |
2026-01-30 03:26:51.723497 | orchestrator | TASK [include_role : watcher] **************************************************
2026-01-30 03:26:51.723509 | orchestrator | Friday 30 January 2026 03:26:07 +0000 (0:00:00.264) 0:04:44.297 ********
2026-01-30 03:26:51.723520 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.723531 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.723542 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.723553 | orchestrator |
2026-01-30 03:26:51.723564 | orchestrator | TASK [include_role : zun] ******************************************************
2026-01-30 03:26:51.723592 | orchestrator | Friday 30 January 2026 03:26:07 +0000 (0:00:00.454) 0:04:44.751 ********
2026-01-30 03:26:51.723605 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.723616 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.723627 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.723638 | orchestrator |
2026-01-30 03:26:51.723649 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-01-30 03:26:51.723660 | orchestrator | Friday 30 January 2026 03:26:08 +0000 (0:00:00.456) 0:04:45.208 ********
2026-01-30 03:26:51.723671 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.723683 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.723694 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.723705 | orchestrator |
2026-01-30 03:26:51.723716 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-01-30 03:26:51.723756 | orchestrator | Friday 30 January 2026 03:26:09 +0000 (0:00:01.520) 0:04:46.728 ********
2026-01-30 03:26:51.723768 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.723778 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.723792 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.723804 | orchestrator |
2026-01-30 03:26:51.723816 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-01-30 03:26:51.723829 | orchestrator | Friday 30 January 2026 03:26:10 +0000 (0:00:00.284) 0:04:47.013 ********
2026-01-30 03:26:51.723851 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.723863 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.723875 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.723887 | orchestrator |
2026-01-30 03:26:51.723899 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-01-30 03:26:51.723911 | orchestrator | Friday 30 January 2026 03:26:11 +0000 (0:00:01.042) 0:04:48.055 ********
2026-01-30 03:26:51.723924 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.723936 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.723948 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.723960 | orchestrator |
2026-01-30 03:26:51.723973 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-01-30 03:26:51.723985 | orchestrator | Friday 30 January 2026 03:26:11 +0000 (0:00:00.792) 0:04:48.847 ********
2026-01-30 03:26:51.723997 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.724009 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.724022 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.724033 | orchestrator |
2026-01-30 03:26:51.724046 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-01-30 03:26:51.724059 | orchestrator | Friday 30 January 2026 03:26:12 +0000 (0:00:00.788) 0:04:49.636 ********
2026-01-30 03:26:51.724072 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:26:51.724084 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:26:51.724097 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:26:51.724109 | orchestrator |
2026-01-30 03:26:51.724122 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-01-30 03:26:51.724135 | orchestrator | Friday 30 January 2026 03:26:22 +0000 (0:00:09.344) 0:04:58.980 ********
2026-01-30 03:26:51.724148 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.724159 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.724169 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.724180 | orchestrator |
2026-01-30 03:26:51.724191 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-01-30 03:26:51.724202 | orchestrator | Friday 30 January 2026 03:26:23 +0000 (0:00:00.972) 0:04:59.952 ********
2026-01-30 03:26:51.724213 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:26:51.724246 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:26:51.724257 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:26:51.724268 | orchestrator |
2026-01-30 03:26:51.724279 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-01-30 03:26:51.724290 | orchestrator | Friday 30 January 2026 03:26:33 +0000 (0:00:10.119) 0:05:10.072 ********
2026-01-30 03:26:51.724301 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.724312 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.724322 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.724333 | orchestrator |
2026-01-30 03:26:51.724344 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-01-30 03:26:51.724355 | orchestrator | Friday 30 January 2026 03:26:37 +0000 (0:00:04.669) 0:05:14.742 ********
2026-01-30 03:26:51.724366 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:26:51.724377 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:26:51.724388 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:26:51.724399 | orchestrator |
2026-01-30 03:26:51.724410 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-01-30 03:26:51.724421 | orchestrator | Friday 30 January 2026 03:26:46 +0000 (0:00:08.938) 0:05:23.680 ********
2026-01-30 03:26:51.724444 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.724456 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.724466 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.724477 | orchestrator |
2026-01-30 03:26:51.724488 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-01-30 03:26:51.724499 | orchestrator | Friday 30 January 2026 03:26:47 +0000 (0:00:00.615) 0:05:24.296 ********
2026-01-30 03:26:51.724510 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.724521 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.724532 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.724543 | orchestrator |
2026-01-30 03:26:51.724574 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-01-30 03:26:51.724586 | orchestrator | Friday 30 January 2026 03:26:47 +0000 (0:00:00.338) 0:05:24.634 ********
2026-01-30 03:26:51.724596 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.724607 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.724618 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.724629 | orchestrator |
2026-01-30 03:26:51.724640 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-01-30 03:26:51.724651 | orchestrator | Friday 30 January 2026 03:26:48 +0000 (0:00:00.347) 0:05:24.982 ********
2026-01-30 03:26:51.724662 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.724673 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.724684 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.724695 | orchestrator |
2026-01-30 03:26:51.724706 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-01-30 03:26:51.724717 | orchestrator | Friday 30 January 2026 03:26:48 +0000 (0:00:00.326) 0:05:25.308 ********
2026-01-30 03:26:51.724727 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.724744 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.724755 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.724766 | orchestrator |
2026-01-30 03:26:51.724777 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-01-30 03:26:51.724788 | orchestrator | Friday 30 January 2026 03:26:49 +0000 (0:00:00.617) 0:05:25.926 ********
2026-01-30 03:26:51.724799 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:26:51.724810 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:26:51.724821 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:26:51.724831 | orchestrator |
2026-01-30 03:26:51.724843 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-01-30 03:26:51.724853 | orchestrator | Friday 30 January 2026 03:26:49 +0000 (0:00:00.349) 0:05:26.276 ********
2026-01-30 03:26:51.724864 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:26:51.724875 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:26:51.724886 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:26:51.724897 | orchestrator |
2026-01-30 03:26:51.724908 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-01-30 03:26:51.724919 | orchestrator | Friday 
30 January 2026 03:26:50 +0000 (0:00:00.854) 0:05:27.130 ******** 2026-01-30 03:26:51.724930 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:26:51.724941 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:26:51.724952 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:26:51.724962 | orchestrator | 2026-01-30 03:26:51.724973 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:26:51.724986 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-30 03:26:51.724999 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-30 03:26:51.725010 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-30 03:26:51.725020 | orchestrator | 2026-01-30 03:26:51.725039 | orchestrator | 2026-01-30 03:26:51.725050 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:26:51.725061 | orchestrator | Friday 30 January 2026 03:26:51 +0000 (0:00:00.769) 0:05:27.900 ******** 2026-01-30 03:26:51.725072 | orchestrator | =============================================================================== 2026-01-30 03:26:51.725083 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.12s 2026-01-30 03:26:51.725093 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.34s 2026-01-30 03:26:51.725104 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.94s 2026-01-30 03:26:51.725115 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.38s 2026-01-30 03:26:51.725126 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.05s 2026-01-30 03:26:51.725137 | orchestrator | loadbalancer : Wait for backup proxysql to 
start ------------------------ 4.67s 2026-01-30 03:26:51.725148 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.06s 2026-01-30 03:26:51.725158 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.92s 2026-01-30 03:26:51.725169 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.79s 2026-01-30 03:26:51.725180 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.69s 2026-01-30 03:26:51.725191 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.63s 2026-01-30 03:26:51.725202 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.50s 2026-01-30 03:26:51.725213 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.24s 2026-01-30 03:26:51.725243 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.21s 2026-01-30 03:26:51.725254 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.19s 2026-01-30 03:26:51.725265 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.19s 2026-01-30 03:26:51.725276 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.14s 2026-01-30 03:26:51.725287 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.13s 2026-01-30 03:26:51.725298 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.10s 2026-01-30 03:26:51.725309 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.09s 2026-01-30 03:26:53.900090 | orchestrator | 2026-01-30 03:26:53 | INFO  | Task e4432a8a-8f33-4d82-81fd-56ccfe485179 (opensearch) was prepared for execution. 
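Editor's note on the "Wait for … to listen on VIP" handlers above: these steps pass once a TCP connection to the virtual IP on the service port succeeds, which is the behaviour of Ansible's `wait_for` module. A minimal sketch of that poll, assuming hypothetical host/port values (the real VIP and ports come from the kolla-ansible configuration, not from this log):

```python
import socket
import time


def wait_for_listen(host: str, port: int, timeout: float = 30.0,
                    interval: float = 1.0) -> bool:
    """Poll until a TCP connect to host:port succeeds, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means something is accepting on the VIP port.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            # Connection refused / timed out: back off briefly and retry.
            time.sleep(interval)
    return False


# Hypothetical illustration only -- not taken from this job's configuration:
# wait_for_listen("192.168.16.254", 9200)
```

This mirrors why the handlers report `ok` rather than `changed`: the check is read-only and idempotent.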
2026-01-30 03:26:53.900192 | orchestrator | 2026-01-30 03:26:53 | INFO  | It takes a moment until task e4432a8a-8f33-4d82-81fd-56ccfe485179 (opensearch) has been started and output is visible here. 2026-01-30 03:27:04.513883 | orchestrator | 2026-01-30 03:27:04.514121 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 03:27:04.514160 | orchestrator | 2026-01-30 03:27:04.514179 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 03:27:04.514255 | orchestrator | Friday 30 January 2026 03:26:57 +0000 (0:00:00.192) 0:00:00.192 ******** 2026-01-30 03:27:04.514276 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:27:04.514296 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:27:04.514314 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:27:04.514333 | orchestrator | 2026-01-30 03:27:04.514352 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 03:27:04.514371 | orchestrator | Friday 30 January 2026 03:26:57 +0000 (0:00:00.214) 0:00:00.406 ******** 2026-01-30 03:27:04.514409 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-30 03:27:04.514423 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-30 03:27:04.514437 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-30 03:27:04.514450 | orchestrator | 2026-01-30 03:27:04.514463 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-30 03:27:04.514500 | orchestrator | 2026-01-30 03:27:04.514513 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 03:27:04.514526 | orchestrator | Friday 30 January 2026 03:26:58 +0000 (0:00:00.312) 0:00:00.719 ******** 2026-01-30 03:27:04.514539 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-30 03:27:04.514551 | orchestrator | 2026-01-30 03:27:04.514563 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-01-30 03:27:04.514576 | orchestrator | Friday 30 January 2026 03:26:58 +0000 (0:00:00.409) 0:00:01.128 ******** 2026-01-30 03:27:04.514588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 03:27:04.514602 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 03:27:04.514616 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 03:27:04.514628 | orchestrator | 2026-01-30 03:27:04.514641 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-30 03:27:04.514653 | orchestrator | Friday 30 January 2026 03:27:00 +0000 (0:00:01.661) 0:00:02.789 ******** 2026-01-30 03:27:04.514671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:04.514689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:04.514725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:04.514747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:04.514771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:04.514784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:04.514797 | orchestrator | 2026-01-30 03:27:04.514808 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 03:27:04.514819 | orchestrator | Friday 30 January 2026 03:27:01 +0000 (0:00:01.411) 0:00:04.200 ******** 2026-01-30 03:27:04.514831 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:27:04.514842 | orchestrator | 2026-01-30 03:27:04.514853 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-30 03:27:04.514864 | orchestrator | Friday 30 January 2026 03:27:02 +0000 (0:00:00.456) 0:00:04.657 ******** 2026-01-30 03:27:04.514894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:05.242696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:05.242811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:05.242831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:05.242847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:05.242922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:05.242938 | orchestrator | 2026-01-30 03:27:05.242952 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-30 03:27:05.242965 | orchestrator | Friday 30 January 2026 03:27:04 +0000 (0:00:02.292) 0:00:06.949 ******** 
2026-01-30 03:27:05.242978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-30 03:27:05.242991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-01-30 03:27:05.243003 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:27:05.243017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-30 03:27:05.243051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-30 03:27:06.211176 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:27:06.211341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-30 03:27:06.211365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-30 03:27:06.211379 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:27:06.211391 | orchestrator | 2026-01-30 03:27:06.211403 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-30 03:27:06.211417 | orchestrator | Friday 30 January 2026 03:27:05 +0000 (0:00:00.732) 0:00:07.682 ******** 2026-01-30 03:27:06.211456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-30 03:27:06.211484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-30 03:27:06.211518 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:27:06.211530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-30 03:27:06.211543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-30 03:27:06.211555 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:27:06.211574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-30 03:27:06.211592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-30 03:27:06.211604 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:27:06.211615 | orchestrator | 2026-01-30 03:27:06.211627 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-30 03:27:06.211646 | orchestrator | Friday 30 January 2026 03:27:06 +0000 (0:00:00.962) 0:00:08.644 ******** 2026-01-30 03:27:13.839308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:13.839423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:13.839442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:13.839497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:13.839534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:13.839548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:27:13.839570 | orchestrator | 2026-01-30 03:27:13.839583 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-30 03:27:13.839596 | orchestrator | Friday 30 January 2026 03:27:08 +0000 (0:00:02.188) 0:00:10.832 ******** 2026-01-30 03:27:13.839607 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:27:13.839620 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:27:13.839631 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:27:13.839642 | orchestrator | 2026-01-30 03:27:13.839653 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-30 03:27:13.839664 | orchestrator | Friday 30 January 2026 03:27:10 +0000 (0:00:02.125) 0:00:12.957 ******** 2026-01-30 03:27:13.839675 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:27:13.839686 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:27:13.839697 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:27:13.839707 | 
orchestrator | 2026-01-30 03:27:13.839718 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-30 03:27:13.839729 | orchestrator | Friday 30 January 2026 03:27:12 +0000 (0:00:01.715) 0:00:14.673 ******** 2026-01-30 03:27:13.839741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:27:13.839759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-01-30 03:27:13.839782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-30 03:30:03.308123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-01-30 03:30:03.308257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:30:03.308287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-30 03:30:03.308297 | orchestrator | 2026-01-30 03:30:03.308308 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 03:30:03.308318 | orchestrator | Friday 30 January 2026 03:27:13 +0000 (0:00:01.602) 0:00:16.276 ******** 2026-01-30 03:30:03.308326 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:30:03.308336 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:30:03.308344 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:30:03.308352 | orchestrator | 2026-01-30 03:30:03.308360 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-30 03:30:03.308369 | orchestrator | Friday 30 January 2026 03:27:14 +0000 (0:00:00.283) 0:00:16.560 ******** 2026-01-30 03:30:03.308377 | orchestrator | 2026-01-30 03:30:03.308385 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-30 03:30:03.308393 | orchestrator | Friday 30 January 2026 03:27:14 +0000 (0:00:00.063) 0:00:16.623 ******** 2026-01-30 03:30:03.308401 | orchestrator | 2026-01-30 03:30:03.308409 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-30 03:30:03.308426 | orchestrator | Friday 30 January 2026 03:27:14 +0000 (0:00:00.061) 0:00:16.684 ******** 2026-01-30 03:30:03.308434 | orchestrator | 2026-01-30 03:30:03.308442 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-30 03:30:03.308465 | orchestrator | Friday 30 January 2026 03:27:14 +0000 (0:00:00.062) 0:00:16.747 ******** 2026-01-30 03:30:03.308474 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:30:03.308482 | orchestrator | 
2026-01-30 03:30:03.308490 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-30 03:30:03.308498 | orchestrator | Friday 30 January 2026 03:27:14 +0000 (0:00:00.211) 0:00:16.959 ******** 2026-01-30 03:30:03.308505 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:30:03.308513 | orchestrator | 2026-01-30 03:30:03.308521 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-30 03:30:03.308529 | orchestrator | Friday 30 January 2026 03:27:15 +0000 (0:00:00.549) 0:00:17.508 ******** 2026-01-30 03:30:03.308537 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:30:03.308545 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:30:03.308553 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:30:03.308563 | orchestrator | 2026-01-30 03:30:03.308572 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-30 03:30:03.308581 | orchestrator | Friday 30 January 2026 03:28:28 +0000 (0:01:13.588) 0:01:31.097 ******** 2026-01-30 03:30:03.308591 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:30:03.308601 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:30:03.308610 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:30:03.308619 | orchestrator | 2026-01-30 03:30:03.308628 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 03:30:03.308637 | orchestrator | Friday 30 January 2026 03:29:52 +0000 (0:01:23.497) 0:02:54.594 ******** 2026-01-30 03:30:03.308648 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:30:03.308657 | orchestrator | 2026-01-30 03:30:03.308666 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-30 03:30:03.308676 | orchestrator | Friday 30 January 2026 03:29:52 +0000 
(0:00:00.485) 0:02:55.079 ******** 2026-01-30 03:30:03.308685 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:30:03.308694 | orchestrator | 2026-01-30 03:30:03.308703 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-30 03:30:03.308712 | orchestrator | Friday 30 January 2026 03:29:55 +0000 (0:00:02.749) 0:02:57.829 ******** 2026-01-30 03:30:03.308722 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:30:03.308731 | orchestrator | 2026-01-30 03:30:03.308740 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-30 03:30:03.308750 | orchestrator | Friday 30 January 2026 03:29:57 +0000 (0:00:02.329) 0:03:00.158 ******** 2026-01-30 03:30:03.308758 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:30:03.308765 | orchestrator | 2026-01-30 03:30:03.308774 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-30 03:30:03.308782 | orchestrator | Friday 30 January 2026 03:30:00 +0000 (0:00:02.915) 0:03:03.073 ******** 2026-01-30 03:30:03.308789 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:30:03.308798 | orchestrator | 2026-01-30 03:30:03.308806 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:30:03.308815 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-30 03:30:03.308824 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 03:30:03.308836 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 03:30:03.308844 | orchestrator | 2026-01-30 03:30:03.308852 | orchestrator | 2026-01-30 03:30:03.308866 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:30:03.308874 | orchestrator | Friday 30 
January 2026 03:30:03 +0000 (0:00:02.649) 0:03:05.723 ******** 2026-01-30 03:30:03.308882 | orchestrator | =============================================================================== 2026-01-30 03:30:03.308890 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.50s 2026-01-30 03:30:03.308898 | orchestrator | opensearch : Restart opensearch container ------------------------------ 73.59s 2026-01-30 03:30:03.308906 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.92s 2026-01-30 03:30:03.308913 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.75s 2026-01-30 03:30:03.308921 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.65s 2026-01-30 03:30:03.308929 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.33s 2026-01-30 03:30:03.308937 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.29s 2026-01-30 03:30:03.308969 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.19s 2026-01-30 03:30:03.308978 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.13s 2026-01-30 03:30:03.308985 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.72s 2026-01-30 03:30:03.308993 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.66s 2026-01-30 03:30:03.309001 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.60s 2026-01-30 03:30:03.309009 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.41s 2026-01-30 03:30:03.309017 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.96s 2026-01-30 03:30:03.309025 | orchestrator | service-cert-copy : opensearch 
| Copying over backend internal TLS certificate --- 0.73s 2026-01-30 03:30:03.309033 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.55s 2026-01-30 03:30:03.309046 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-01-30 03:30:03.585098 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s 2026-01-30 03:30:03.585195 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.41s 2026-01-30 03:30:03.585209 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.31s 2026-01-30 03:30:05.905240 | orchestrator | 2026-01-30 03:30:05 | INFO  | Task c0cfa904-dd75-40a4-8aea-4aa490c62fd9 (memcached) was prepared for execution. 2026-01-30 03:30:05.905319 | orchestrator | 2026-01-30 03:30:05 | INFO  | It takes a moment until task c0cfa904-dd75-40a4-8aea-4aa490c62fd9 (memcached) has been started and output is visible here. 
2026-01-30 03:30:16.380706 | orchestrator | 2026-01-30 03:30:16.380854 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 03:30:16.380873 | orchestrator | 2026-01-30 03:30:16.380886 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 03:30:16.380967 | orchestrator | Friday 30 January 2026 03:30:09 +0000 (0:00:00.223) 0:00:00.223 ******** 2026-01-30 03:30:16.380984 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:30:16.380997 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:30:16.381008 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:30:16.381020 | orchestrator | 2026-01-30 03:30:16.381031 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 03:30:16.381042 | orchestrator | Friday 30 January 2026 03:30:09 +0000 (0:00:00.237) 0:00:00.461 ******** 2026-01-30 03:30:16.381055 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-30 03:30:16.381066 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-30 03:30:16.381077 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-30 03:30:16.381088 | orchestrator | 2026-01-30 03:30:16.381099 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-30 03:30:16.381139 | orchestrator | 2026-01-30 03:30:16.381232 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-30 03:30:16.381248 | orchestrator | Friday 30 January 2026 03:30:10 +0000 (0:00:00.326) 0:00:00.787 ******** 2026-01-30 03:30:16.381262 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:30:16.381276 | orchestrator | 2026-01-30 03:30:16.381288 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-01-30 03:30:16.381301 | orchestrator | Friday 30 January 2026 03:30:10 +0000 (0:00:00.422) 0:00:01.210 ******** 2026-01-30 03:30:16.381313 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-30 03:30:16.381326 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-30 03:30:16.381339 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-30 03:30:16.381358 | orchestrator | 2026-01-30 03:30:16.381382 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-30 03:30:16.381409 | orchestrator | Friday 30 January 2026 03:30:11 +0000 (0:00:00.579) 0:00:01.789 ******** 2026-01-30 03:30:16.381428 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-01-30 03:30:16.381446 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-01-30 03:30:16.381463 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-01-30 03:30:16.381481 | orchestrator | 2026-01-30 03:30:16.381498 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-01-30 03:30:16.381514 | orchestrator | Friday 30 January 2026 03:30:12 +0000 (0:00:01.461) 0:00:03.251 ******** 2026-01-30 03:30:16.381552 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:30:16.381570 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:30:16.381588 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:30:16.381604 | orchestrator | 2026-01-30 03:30:16.381621 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-30 03:30:16.381638 | orchestrator | Friday 30 January 2026 03:30:14 +0000 (0:00:01.361) 0:00:04.613 ******** 2026-01-30 03:30:16.381655 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:30:16.381672 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:30:16.381689 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:30:16.381707 | orchestrator | 2026-01-30 
03:30:16.381726 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:30:16.381744 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:30:16.381763 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:30:16.381783 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:30:16.381797 | orchestrator | 2026-01-30 03:30:16.381808 | orchestrator | 2026-01-30 03:30:16.381819 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:30:16.381830 | orchestrator | Friday 30 January 2026 03:30:16 +0000 (0:00:02.008) 0:00:06.622 ******** 2026-01-30 03:30:16.381841 | orchestrator | =============================================================================== 2026-01-30 03:30:16.381852 | orchestrator | memcached : Restart memcached container --------------------------------- 2.01s 2026-01-30 03:30:16.381863 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.46s 2026-01-30 03:30:16.381875 | orchestrator | memcached : Check memcached container ----------------------------------- 1.36s 2026-01-30 03:30:16.381885 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.58s 2026-01-30 03:30:16.381896 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.42s 2026-01-30 03:30:16.381907 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2026-01-30 03:30:16.381918 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s 2026-01-30 03:30:18.555175 | orchestrator | 2026-01-30 03:30:18 | INFO  | Task 7fa3eb1c-001d-438b-a2fd-16b0a2e2c22a (redis) was prepared for execution. 
2026-01-30 03:30:18.555292 | orchestrator | 2026-01-30 03:30:18 | INFO  | It takes a moment until task 7fa3eb1c-001d-438b-a2fd-16b0a2e2c22a (redis) has been started and output is visible here. 2026-01-30 03:30:27.077259 | orchestrator | 2026-01-30 03:30:27.077404 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 03:30:27.077439 | orchestrator | 2026-01-30 03:30:27.077466 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 03:30:27.077485 | orchestrator | Friday 30 January 2026 03:30:22 +0000 (0:00:00.241) 0:00:00.241 ******** 2026-01-30 03:30:27.077505 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:30:27.077524 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:30:27.077543 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:30:27.077562 | orchestrator | 2026-01-30 03:30:27.077582 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 03:30:27.077601 | orchestrator | Friday 30 January 2026 03:30:22 +0000 (0:00:00.284) 0:00:00.526 ******** 2026-01-30 03:30:27.077619 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-30 03:30:27.077638 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-30 03:30:27.077657 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-30 03:30:27.077677 | orchestrator | 2026-01-30 03:30:27.077696 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-30 03:30:27.077715 | orchestrator | 2026-01-30 03:30:27.077735 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-30 03:30:27.077754 | orchestrator | Friday 30 January 2026 03:30:23 +0000 (0:00:00.397) 0:00:00.923 ******** 2026-01-30 03:30:27.077773 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-01-30 03:30:27.077792 | orchestrator | 2026-01-30 03:30:27.077810 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-30 03:30:27.077827 | orchestrator | Friday 30 January 2026 03:30:23 +0000 (0:00:00.458) 0:00:01.381 ******** 2026-01-30 03:30:27.077851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.077881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.077904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.077996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.078130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.078153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.078165 | orchestrator | 2026-01-30 03:30:27.078177 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-30 03:30:27.078188 | orchestrator | Friday 30 January 2026 03:30:24 +0000 (0:00:01.051) 0:00:02.432 ******** 2026-01-30 03:30:27.078200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.078258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.078272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.078296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:27.078317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117297 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117384 | orchestrator | 2026-01-30 03:30:31.117402 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-30 03:30:31.117416 | orchestrator | Friday 30 January 2026 03:30:27 +0000 (0:00:02.343) 0:00:04.776 ******** 2026-01-30 03:30:31.117430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117561 | orchestrator | 2026-01-30 03:30:31.117572 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-01-30 03:30:31.117584 | orchestrator | Friday 30 January 2026 03:30:29 +0000 (0:00:02.363) 0:00:07.140 ******** 2026-01-30 03:30:31.117596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:31.117683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 03:30:42.005013 | orchestrator | 2026-01-30 03:30:42.005110 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-30 03:30:42.005125 | orchestrator | Friday 30 January 2026 03:30:30 +0000 (0:00:01.476) 0:00:08.617 ******** 2026-01-30 03:30:42.005135 | orchestrator | 2026-01-30 03:30:42.005145 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-30 03:30:42.005154 | orchestrator | Friday 30 January 2026 03:30:30 +0000 (0:00:00.067) 0:00:08.684 ******** 2026-01-30 03:30:42.005163 | orchestrator | 2026-01-30 03:30:42.005172 | orchestrator | TASK [redis : Flush handlers] 
**************
2026-01-30 03:30:42.005181 | orchestrator | Friday 30 January 2026 03:30:31 +0000 (0:00:00.065) 0:00:08.750 ********
2026-01-30 03:30:42.005189 | orchestrator |
2026-01-30 03:30:42.005198 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-01-30 03:30:42.005207 | orchestrator | Friday 30 January 2026 03:30:31 +0000 (0:00:00.064) 0:00:08.814 ********
2026-01-30 03:30:42.005216 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:30:42.005227 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:30:42.005236 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:30:42.005245 | orchestrator |
2026-01-30 03:30:42.005254 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-01-30 03:30:42.005263 | orchestrator | Friday 30 January 2026 03:30:33 +0000 (0:00:02.736) 0:00:11.550 ********
2026-01-30 03:30:42.005293 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:30:42.005303 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:30:42.005311 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:30:42.005320 | orchestrator |
2026-01-30 03:30:42.005329 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:30:42.005339 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:30:42.005348 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:30:42.005368 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:30:42.005377 | orchestrator |
2026-01-30 03:30:42.005386 | orchestrator |
2026-01-30 03:30:42.005395 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:30:42.005404 | orchestrator | Friday 30 January 2026 03:30:41 +0000 (0:00:07.962) 0:00:19.513 ********
2026-01-30 03:30:42.005412 | orchestrator | ===============================================================================
2026-01-30 03:30:42.005421 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.96s
2026-01-30 03:30:42.005430 | orchestrator | redis : Restart redis container ----------------------------------------- 2.74s
2026-01-30 03:30:42.005438 | orchestrator | redis : Copying over redis config files --------------------------------- 2.36s
2026-01-30 03:30:42.005447 | orchestrator | redis : Copying over default config.json files -------------------------- 2.34s
2026-01-30 03:30:42.005455 | orchestrator | redis : Check redis containers ------------------------------------------ 1.48s
2026-01-30 03:30:42.005464 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.05s
2026-01-30 03:30:42.005473 | orchestrator | redis : include_tasks --------------------------------------------------- 0.46s
2026-01-30 03:30:42.005481 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-01-30 03:30:42.005490 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2026-01-30 03:30:42.005498 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2026-01-30 03:30:43.894421 | orchestrator | 2026-01-30 03:30:43 | INFO  | Task d2551759-f98a-4aef-85bf-9ace081af341 (mariadb) was prepared for execution.
2026-01-30 03:30:43.894514 | orchestrator | 2026-01-30 03:30:43 | INFO  | It takes a moment until task d2551759-f98a-4aef-85bf-9ace081af341 (mariadb) has been started and output is visible here.
2026-01-30 03:30:55.338997 | orchestrator |
2026-01-30 03:30:55.339080 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 03:30:55.339088 | orchestrator |
2026-01-30 03:30:55.339093 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 03:30:55.339099 | orchestrator | Friday 30 January 2026 03:30:47 +0000 (0:00:00.118) 0:00:00.118 ********
2026-01-30 03:30:55.339105 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:30:55.339114 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:30:55.339121 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:30:55.339129 | orchestrator |
2026-01-30 03:30:55.339137 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 03:30:55.339145 | orchestrator | Friday 30 January 2026 03:30:47 +0000 (0:00:00.229) 0:00:00.348 ********
2026-01-30 03:30:55.339153 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-01-30 03:30:55.339161 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-01-30 03:30:55.339169 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-01-30 03:30:55.339176 | orchestrator |
2026-01-30 03:30:55.339183 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-01-30 03:30:55.339190 | orchestrator |
2026-01-30 03:30:55.339198 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-01-30 03:30:55.339230 | orchestrator | Friday 30 January 2026 03:30:48 +0000 (0:00:00.408) 0:00:00.757 ********
2026-01-30 03:30:55.339238 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 03:30:55.339246 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 03:30:55.339253 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 03:30:55.339260 | orchestrator |
2026-01-30 03:30:55.339268 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-30 03:30:55.339274 | orchestrator | Friday 30 January 2026 03:30:48 +0000 (0:00:00.310) 0:00:01.068 ******** 2026-01-30 03:30:55.339281 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:30:55.339290 | orchestrator | 2026-01-30 03:30:55.339297 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-30 03:30:55.339305 | orchestrator | Friday 30 January 2026 03:30:48 +0000 (0:00:00.414) 0:00:01.483 ******** 2026-01-30 03:30:55.339330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 03:30:55.339358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 03:30:55.339374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-30 03:30:55.339380 | orchestrator |
2026-01-30 03:30:55.339385 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-01-30 03:30:55.339389 | orchestrator | Friday 30 January 2026 03:30:50 +0000 (0:00:02.128) 0:00:03.611 ********
2026-01-30 03:30:55.339394 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:30:55.339400 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:30:55.339405 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:30:55.339410 | orchestrator |
2026-01-30 03:30:55.339414 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-01-30 03:30:55.339419 | orchestrator | Friday 30 January 2026 03:30:51 +0000 (0:00:00.509) 0:00:04.121 ********
2026-01-30 03:30:55.339423 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:30:55.339428 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:30:55.339433 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:30:55.339437 | orchestrator |
2026-01-30 03:30:55.339442 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-01-30 03:30:55.339447 | orchestrator | Friday 30 January 2026 03:30:52 +0000 (0:00:01.295) 0:00:05.416 ********
2026-01-30 03:30:55.339456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro',
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 03:31:02.051441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 03:31:02.051557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 03:31:02.051599 | orchestrator | 2026-01-30 03:31:02.051614 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-30 03:31:02.051627 | orchestrator | Friday 30 January 2026 03:30:55 +0000 (0:00:02.612) 0:00:08.028 ******** 2026-01-30 03:31:02.051639 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:31:02.051652 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:31:02.051663 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:31:02.051674 | orchestrator | 2026-01-30 03:31:02.051686 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-30 03:31:02.051715 | orchestrator | Friday 30 January 2026 03:30:56 +0000 (0:00:01.006) 0:00:09.035 ******** 2026-01-30 03:31:02.051726 | 
orchestrator | changed: [testbed-node-0] 2026-01-30 03:31:02.051738 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:31:02.051748 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:31:02.051760 | orchestrator | 2026-01-30 03:31:02.051771 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-30 03:31:02.051783 | orchestrator | Friday 30 January 2026 03:30:59 +0000 (0:00:03.151) 0:00:12.187 ******** 2026-01-30 03:31:02.051795 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:31:02.051806 | orchestrator | 2026-01-30 03:31:02.051818 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-30 03:31:02.051829 | orchestrator | Friday 30 January 2026 03:30:59 +0000 (0:00:00.477) 0:00:12.665 ******** 2026-01-30 03:31:02.051848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:02.051871 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:31:02.051961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:06.531489 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:31:06.531594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:06.531632 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:31:06.531643 | orchestrator | 2026-01-30 03:31:06.531653 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-30 03:31:06.531664 | orchestrator | Friday 30 January 2026 03:31:02 +0000 (0:00:02.077) 0:00:14.743 ******** 2026-01-30 03:31:06.531675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:06.531684 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:31:06.531715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:06.531732 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:31:06.531742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:06.531752 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:31:06.531761 | orchestrator | 2026-01-30 03:31:06.531770 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-30 03:31:06.531779 | orchestrator | Friday 30 January 2026 03:31:04 +0000 (0:00:02.257) 0:00:17.000 ******** 2026-01-30 03:31:06.531800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:09.130478 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:31:09.130596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:09.130618 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:31:09.130650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 03:31:09.130715 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:31:09.130751 | orchestrator | 2026-01-30 03:31:09.130773 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-01-30 03:31:09.130787 | orchestrator | Friday 30 January 2026 03:31:06 +0000 (0:00:02.222) 0:00:19.222 ******** 2026-01-30 03:31:09.130821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 03:31:09.130835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 03:31:09.130920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-30 03:33:15.689540 | orchestrator |
2026-01-30 03:33:15.689648 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-01-30 03:33:15.689662 | orchestrator | Friday 30 January 2026 03:31:09 +0000 (0:00:02.601) 0:00:21.824 ********
2026-01-30 03:33:15.689669 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:33:15.689679 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:33:15.689686 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:33:15.689692 | orchestrator |
2026-01-30 03:33:15.689699 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-30 03:33:15.689706 | orchestrator | Friday 30 January 2026 03:31:09 +0000 (0:00:00.780) 0:00:22.604 ********
2026-01-30 03:33:15.689713 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:33:15.689721 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:33:15.689728 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:33:15.689735 | orchestrator |
2026-01-30 03:33:15.689786 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-30 03:33:15.689794 | orchestrator | Friday 30 January 2026 03:31:10 +0000 (0:00:00.462) 0:00:23.066 ********
2026-01-30 03:33:15.689801 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:33:15.689807 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:33:15.689814 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:33:15.689820 | orchestrator |
2026-01-30 03:33:15.689826 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-01-30 03:33:15.689833 | orchestrator | Friday 30 January 2026 03:31:10 +0000 (0:00:00.310) 0:00:23.377 ********
2026-01-30 03:33:15.689842 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-01-30 03:33:15.689850 | orchestrator | ...ignoring
2026-01-30 03:33:15.689858 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-01-30 03:33:15.689864 | orchestrator | ...ignoring
2026-01-30 03:33:15.689871 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-01-30 03:33:15.689878 | orchestrator | ...ignoring
2026-01-30 03:33:15.689910 | orchestrator |
2026-01-30 03:33:15.689918 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-01-30 03:33:15.689925 | orchestrator | Friday 30 January 2026 03:31:21 +0000 (0:00:10.903) 0:00:34.281 ********
2026-01-30 03:33:15.689931 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:33:15.689938 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:33:15.689944 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:33:15.689951 | orchestrator |
2026-01-30 03:33:15.689957 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-01-30 03:33:15.689964 | orchestrator | Friday 30 January 2026 03:31:21 +0000 (0:00:00.387) 0:00:34.668 ********
2026-01-30 03:33:15.689971 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:33:15.689978 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:33:15.689985 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:33:15.689992 | orchestrator |
2026-01-30 03:33:15.689999 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-01-30 03:33:15.690006 | orchestrator | Friday 30 January 2026 03:31:22 +0000 (0:00:00.579) 0:00:35.248 ********
2026-01-30 03:33:15.690013 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:33:15.690062 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:33:15.690070 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:33:15.690078 | orchestrator |
2026-01-30 03:33:15.690099 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-01-30 03:33:15.690108 | orchestrator | Friday 30 January 2026 03:31:22 +0000 (0:00:00.427) 0:00:35.641 ********
2026-01-30 03:33:15.690116 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:33:15.690123 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:33:15.690131 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:33:15.690139 | orchestrator |
2026-01-30 03:33:15.690147 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-01-30 03:33:15.690155 | orchestrator | Friday 30 January 2026 03:31:23 +0000 (0:00:00.404) 0:00:36.068 ********
2026-01-30 03:33:15.690162 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:33:15.690171 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:33:15.690179 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:33:15.690186 | orchestrator |
2026-01-30 03:33:15.690194 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-01-30 03:33:15.690203 | orchestrator | Friday 30 January 2026 03:31:23 +0000 (0:00:00.552) 0:00:36.472 ********
2026-01-30 03:33:15.690210 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:33:15.690217 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:33:15.690225 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:33:15.690232 | orchestrator |
2026-01-30 03:33:15.690239 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-30 03:33:15.690246 | orchestrator | Friday 30 January 2026 03:31:24 +0000 (0:00:00.354) 0:00:37.025 ********
2026-01-30 03:33:15.690253 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:33:15.690260 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:33:15.690267 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-01-30 03:33:15.690275 | orchestrator |
2026-01-30 03:33:15.690282 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-01-30 03:33:15.690289 | orchestrator | Friday 30 January 2026 03:31:24 +0000 (0:00:00.354) 0:00:37.379 ********
2026-01-30 03:33:15.690296 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:33:15.690304 | orchestrator |
2026-01-30 03:33:15.690311 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-01-30 03:33:15.690318 | orchestrator | Friday 30 January 2026 03:31:34 +0000 (0:00:10.091) 0:00:47.471 ********
2026-01-30 03:33:15.690325 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:33:15.690333 | orchestrator |
2026-01-30 03:33:15.690340 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-30 03:33:15.690348 | orchestrator | Friday 30 January 2026 03:31:34 +0000 (0:00:00.118) 0:00:47.589 ********
2026-01-30 03:33:15.690354 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:33:15.690386 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:33:15.690393 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:33:15.690399 | orchestrator |
2026-01-30 03:33:15.690405 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-01-30 03:33:15.690411 | orchestrator | Friday 30 January 2026 03:31:35 +0000 (0:00:00.921) 0:00:48.511 ********
2026-01-30 03:33:15.690417 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:33:15.690424 | orchestrator |
2026-01-30 03:33:15.690431 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-01-30 03:33:15.690437 | orchestrator | Friday 30 January 2026 03:31:42 +0000 (0:00:06.947) 0:00:55.459 ********
2026-01-30 03:33:15.690444 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:33:15.690450 | orchestrator |
2026-01-30 03:33:15.690457 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-01-30 03:33:15.690464 | orchestrator | Friday 30 January 2026 03:31:45 +0000 (0:00:02.540) 0:00:58.000 ********
2026-01-30 03:33:15.690471 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:33:15.690477 |
orchestrator | 2026-01-30 03:33:15.690484 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-30 03:33:15.690491 | orchestrator | Friday 30 January 2026 03:31:47 +0000 (0:00:02.346) 0:01:00.347 ******** 2026-01-30 03:33:15.690497 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:33:15.690503 | orchestrator | 2026-01-30 03:33:15.690509 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-30 03:33:15.690516 | orchestrator | Friday 30 January 2026 03:31:47 +0000 (0:00:00.113) 0:01:00.460 ******** 2026-01-30 03:33:15.690523 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:15.690529 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:33:15.690536 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:33:15.690542 | orchestrator | 2026-01-30 03:33:15.690548 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-30 03:33:15.690555 | orchestrator | Friday 30 January 2026 03:31:48 +0000 (0:00:00.299) 0:01:00.760 ******** 2026-01-30 03:33:15.690562 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:15.690568 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-30 03:33:15.690574 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:33:15.690581 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:33:15.690588 | orchestrator | 2026-01-30 03:33:15.690595 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-30 03:33:15.690602 | orchestrator | skipping: no hosts matched 2026-01-30 03:33:15.690608 | orchestrator | 2026-01-30 03:33:15.690615 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-30 03:33:15.690621 | orchestrator | 2026-01-30 03:33:15.690628 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-01-30 03:33:15.690635 | orchestrator | Friday 30 January 2026 03:31:48 +0000 (0:00:00.466) 0:01:01.226 ******** 2026-01-30 03:33:15.690642 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:33:15.690649 | orchestrator | 2026-01-30 03:33:15.690656 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-30 03:33:15.690663 | orchestrator | Friday 30 January 2026 03:32:04 +0000 (0:00:15.634) 0:01:16.861 ******** 2026-01-30 03:33:15.690670 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:33:15.690677 | orchestrator | 2026-01-30 03:33:15.690684 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-30 03:33:15.690690 | orchestrator | Friday 30 January 2026 03:32:20 +0000 (0:00:16.551) 0:01:33.412 ******** 2026-01-30 03:33:15.690697 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:33:15.690704 | orchestrator | 2026-01-30 03:33:15.690715 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-30 03:33:15.690723 | orchestrator | 2026-01-30 03:33:15.690736 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-30 03:33:15.690780 | orchestrator | Friday 30 January 2026 03:32:22 +0000 (0:00:02.243) 0:01:35.655 ******** 2026-01-30 03:33:15.690794 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:33:15.690801 | orchestrator | 2026-01-30 03:33:15.690807 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-30 03:33:15.690814 | orchestrator | Friday 30 January 2026 03:32:42 +0000 (0:00:19.894) 0:01:55.550 ******** 2026-01-30 03:33:15.690820 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:33:15.690826 | orchestrator | 2026-01-30 03:33:15.690832 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-30 03:33:15.690838 
| orchestrator | Friday 30 January 2026 03:32:54 +0000 (0:00:11.555) 0:02:07.106 ******** 2026-01-30 03:33:15.690844 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:33:15.690850 | orchestrator | 2026-01-30 03:33:15.690856 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-30 03:33:15.690863 | orchestrator | 2026-01-30 03:33:15.690869 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-30 03:33:15.690876 | orchestrator | Friday 30 January 2026 03:32:56 +0000 (0:00:02.292) 0:02:09.398 ******** 2026-01-30 03:33:15.690882 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:33:15.690889 | orchestrator | 2026-01-30 03:33:15.690895 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-30 03:33:15.690901 | orchestrator | Friday 30 January 2026 03:33:07 +0000 (0:00:10.385) 0:02:19.784 ******** 2026-01-30 03:33:15.690908 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:33:15.690915 | orchestrator | 2026-01-30 03:33:15.690922 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-30 03:33:15.690928 | orchestrator | Friday 30 January 2026 03:33:12 +0000 (0:00:05.611) 0:02:25.395 ******** 2026-01-30 03:33:15.690934 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:33:15.690941 | orchestrator | 2026-01-30 03:33:15.690948 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-30 03:33:15.690955 | orchestrator | 2026-01-30 03:33:15.690961 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-30 03:33:15.690968 | orchestrator | Friday 30 January 2026 03:33:15 +0000 (0:00:02.375) 0:02:27.770 ******** 2026-01-30 03:33:15.690975 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:33:15.690983 | orchestrator | 
2026-01-30 03:33:15.690991 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-30 03:33:15.691008 | orchestrator | Friday 30 January 2026 03:33:15 +0000 (0:00:00.611) 0:02:28.382 ******** 2026-01-30 03:33:28.506394 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:33:28.506559 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:33:28.506588 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:33:28.506608 | orchestrator | 2026-01-30 03:33:28.506628 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-30 03:33:28.506648 | orchestrator | Friday 30 January 2026 03:33:18 +0000 (0:00:02.446) 0:02:30.828 ******** 2026-01-30 03:33:28.506666 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:33:28.506684 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:33:28.506703 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:33:28.506722 | orchestrator | 2026-01-30 03:33:28.506852 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-30 03:33:28.506872 | orchestrator | Friday 30 January 2026 03:33:20 +0000 (0:00:02.275) 0:02:33.103 ******** 2026-01-30 03:33:28.506890 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:33:28.506908 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:33:28.506927 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:33:28.506947 | orchestrator | 2026-01-30 03:33:28.506967 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-30 03:33:28.506986 | orchestrator | Friday 30 January 2026 03:33:22 +0000 (0:00:02.518) 0:02:35.621 ******** 2026-01-30 03:33:28.507005 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:33:28.507024 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:33:28.507043 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:33:28.507062 | orchestrator | 
2026-01-30 03:33:28.507126 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-30 03:33:28.507147 | orchestrator | Friday 30 January 2026 03:33:25 +0000 (0:00:02.348) 0:02:37.970 ******** 2026-01-30 03:33:28.507166 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:33:28.507182 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:33:28.507195 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:33:28.507208 | orchestrator | 2026-01-30 03:33:28.507220 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-30 03:33:28.507233 | orchestrator | Friday 30 January 2026 03:33:27 +0000 (0:00:02.664) 0:02:40.634 ******** 2026-01-30 03:33:28.507245 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:28.507258 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:33:28.507270 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:33:28.507283 | orchestrator | 2026-01-30 03:33:28.507294 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:33:28.507306 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-30 03:33:28.507319 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-30 03:33:28.507331 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-30 03:33:28.507341 | orchestrator | 2026-01-30 03:33:28.507352 | orchestrator | 2026-01-30 03:33:28.507363 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:33:28.507374 | orchestrator | Friday 30 January 2026 03:33:28 +0000 (0:00:00.192) 0:02:40.827 ******** 2026-01-30 03:33:28.507385 | orchestrator | =============================================================================== 2026-01-30 03:33:28.507412 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.53s 2026-01-30 03:33:28.507423 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.11s 2026-01-30 03:33:28.507434 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2026-01-30 03:33:28.507445 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.39s 2026-01-30 03:33:28.507455 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.09s 2026-01-30 03:33:28.507466 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.95s 2026-01-30 03:33:28.507477 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.61s 2026-01-30 03:33:28.507488 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.54s 2026-01-30 03:33:28.507499 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.15s 2026-01-30 03:33:28.507510 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.66s 2026-01-30 03:33:28.507523 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.61s 2026-01-30 03:33:28.507541 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.60s 2026-01-30 03:33:28.507557 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.54s 2026-01-30 03:33:28.507571 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.52s 2026-01-30 03:33:28.507586 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.45s 2026-01-30 03:33:28.507607 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.38s 2026-01-30 03:33:28.507634 | orchestrator | 
mariadb : Granting permissions on Mariabackup database to backup user --- 2.35s 2026-01-30 03:33:28.507652 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.35s 2026-01-30 03:33:28.507669 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.28s 2026-01-30 03:33:28.507686 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.26s 2026-01-30 03:33:30.684074 | orchestrator | 2026-01-30 03:33:30 | INFO  | Task 1d3dba42-0f85-4915-b9b8-8c50b294aebe (rabbitmq) was prepared for execution. 2026-01-30 03:33:30.684153 | orchestrator | 2026-01-30 03:33:30 | INFO  | It takes a moment until task 1d3dba42-0f85-4915-b9b8-8c50b294aebe (rabbitmq) has been started and output is visible here. 2026-01-30 03:33:42.387853 | orchestrator | 2026-01-30 03:33:42.387946 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 03:33:42.387957 | orchestrator | 2026-01-30 03:33:42.387966 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 03:33:42.387974 | orchestrator | Friday 30 January 2026 03:33:34 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-01-30 03:33:42.387982 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:33:42.387990 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:33:42.387998 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:33:42.388006 | orchestrator | 2026-01-30 03:33:42.388013 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 03:33:42.388021 | orchestrator | Friday 30 January 2026 03:33:34 +0000 (0:00:00.252) 0:00:00.417 ******** 2026-01-30 03:33:42.388028 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-30 03:33:42.388036 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-30 03:33:42.388044 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-30 03:33:42.388051 | orchestrator | 2026-01-30 03:33:42.388058 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-30 03:33:42.388066 | orchestrator | 2026-01-30 03:33:42.388074 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-30 03:33:42.388081 | orchestrator | Friday 30 January 2026 03:33:35 +0000 (0:00:00.425) 0:00:00.842 ******** 2026-01-30 03:33:42.388089 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:33:42.388098 | orchestrator | 2026-01-30 03:33:42.388105 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-30 03:33:42.388113 | orchestrator | Friday 30 January 2026 03:33:35 +0000 (0:00:00.413) 0:00:01.256 ******** 2026-01-30 03:33:42.388120 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:33:42.388127 | orchestrator | 2026-01-30 03:33:42.388135 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-30 03:33:42.388142 | orchestrator | Friday 30 January 2026 03:33:36 +0000 (0:00:00.855) 0:00:02.112 ******** 2026-01-30 03:33:42.388149 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:42.388158 | orchestrator | 2026-01-30 03:33:42.388166 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-30 03:33:42.388173 | orchestrator | Friday 30 January 2026 03:33:36 +0000 (0:00:00.341) 0:00:02.453 ******** 2026-01-30 03:33:42.388181 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:42.388188 | orchestrator | 2026-01-30 03:33:42.388195 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-30 03:33:42.388203 | orchestrator | Friday 30 January 2026 03:33:37 +0000 (0:00:00.362) 0:00:02.815 ******** 
2026-01-30 03:33:42.388210 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:42.388217 | orchestrator | 2026-01-30 03:33:42.388225 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-30 03:33:42.388232 | orchestrator | Friday 30 January 2026 03:33:37 +0000 (0:00:00.354) 0:00:03.170 ******** 2026-01-30 03:33:42.388240 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:42.388247 | orchestrator | 2026-01-30 03:33:42.388254 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-30 03:33:42.388262 | orchestrator | Friday 30 January 2026 03:33:37 +0000 (0:00:00.429) 0:00:03.599 ******** 2026-01-30 03:33:42.388283 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:33:42.388309 | orchestrator | 2026-01-30 03:33:42.388317 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-30 03:33:42.388324 | orchestrator | Friday 30 January 2026 03:33:38 +0000 (0:00:00.690) 0:00:04.289 ******** 2026-01-30 03:33:42.388332 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:33:42.388339 | orchestrator | 2026-01-30 03:33:42.388346 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-30 03:33:42.388354 | orchestrator | Friday 30 January 2026 03:33:39 +0000 (0:00:00.763) 0:00:05.053 ******** 2026-01-30 03:33:42.388361 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:42.388368 | orchestrator | 2026-01-30 03:33:42.388375 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-30 03:33:42.388383 | orchestrator | Friday 30 January 2026 03:33:39 +0000 (0:00:00.356) 0:00:05.409 ******** 2026-01-30 03:33:42.388390 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:33:42.388397 | orchestrator | 2026-01-30 
03:33:42.388405 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-30 03:33:42.388412 | orchestrator | Friday 30 January 2026 03:33:39 +0000 (0:00:00.345) 0:00:05.755 ******** 2026-01-30 03:33:42.388438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 03:33:42.388450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 03:33:42.388459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 03:33:42.388473 | orchestrator | 2026-01-30 03:33:42.388485 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-30 03:33:42.388492 | orchestrator | Friday 30 January 2026 03:33:40 +0000 (0:00:00.810) 0:00:06.565 ******** 2026-01-30 03:33:42.388500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 03:33:42.388515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 03:33:59.881781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 03:33:59.881908 | orchestrator | 2026-01-30 03:33:59.881928 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-30 03:33:59.881942 | orchestrator | Friday 30 January 2026 03:33:42 +0000 (0:00:01.564) 0:00:08.129 ******** 2026-01-30 03:33:59.881982 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-30 03:33:59.881995 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-30 03:33:59.882007 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-30 03:33:59.882073 | orchestrator | 2026-01-30 03:33:59.882086 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-01-30 03:33:59.882097 | orchestrator | Friday 30 January 2026 03:33:43 +0000 (0:00:01.372) 0:00:09.502 ******** 2026-01-30 03:33:59.882124 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-30 03:33:59.882136 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-30 03:33:59.882147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-30 03:33:59.882158 | orchestrator | 2026-01-30 03:33:59.882170 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-30 03:33:59.882181 | orchestrator | Friday 30 January 2026 03:33:45 +0000 (0:00:01.537) 0:00:11.040 ******** 2026-01-30 03:33:59.882192 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-30 03:33:59.882203 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-30 03:33:59.882214 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-30 03:33:59.882225 | orchestrator | 2026-01-30 03:33:59.882237 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-30 03:33:59.882250 | orchestrator | Friday 30 January 2026 03:33:46 +0000 (0:00:01.299) 0:00:12.339 ******** 2026-01-30 03:33:59.882262 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-30 03:33:59.882275 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-30 03:33:59.882287 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-30 03:33:59.882299 | orchestrator | 2026-01-30 03:33:59.882312 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ********************************
2026-01-30 03:33:59.882324 | orchestrator | Friday 30 January 2026 03:33:48 +0000 (0:00:01.557) 0:00:13.896 ********
2026-01-30 03:33:59.882336 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-30 03:33:59.882349 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-30 03:33:59.882362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-01-30 03:33:59.882373 | orchestrator |
2026-01-30 03:33:59.882384 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-01-30 03:33:59.882396 | orchestrator | Friday 30 January 2026 03:33:49 +0000 (0:00:01.391) 0:00:15.288 ********
2026-01-30 03:33:59.882407 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-30 03:33:59.882418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-30 03:33:59.882429 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-01-30 03:33:59.882440 | orchestrator |
2026-01-30 03:33:59.882451 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-01-30 03:33:59.882462 | orchestrator | Friday 30 January 2026 03:33:50 +0000 (0:00:01.256) 0:00:16.545 ********
2026-01-30 03:33:59.882473 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:33:59.882486 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:33:59.882516 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:33:59.882538 | orchestrator |
2026-01-30 03:33:59.882549 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-01-30 03:33:59.882560 | orchestrator | Friday 30 January 2026 03:33:51 +0000 (0:00:00.343) 0:00:16.888 ********
2026-01-30 03:33:59.882573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:33:59.882592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:33:59.882605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-01-30 03:33:59.882617 | orchestrator |
2026-01-30 03:33:59.882629 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-01-30 03:33:59.882640 | orchestrator | Friday 30 January 2026 03:33:52 +0000 (0:00:01.019) 0:00:17.908 ********
2026-01-30 03:33:59.882651 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:33:59.882663 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:33:59.882674 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:33:59.882685 | orchestrator |
2026-01-30 03:33:59.882695 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-01-30 03:33:59.882743 | orchestrator | Friday 30 January 2026 03:33:52 +0000 (0:00:00.760) 0:00:18.668 ********
2026-01-30 03:33:59.882755 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:33:59.882766 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:33:59.882777 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:33:59.882788 | orchestrator |
2026-01-30 03:33:59.882799 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-01-30 03:33:59.882818 | orchestrator | Friday 30 January 2026 03:33:59 +0000 (0:00:06.955) 0:00:25.623 ********
2026-01-30 03:35:39.594672 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:35:39.594779 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:35:39.594795 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:35:39.594814 | orchestrator |
2026-01-30 03:35:39.594832 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-30 03:35:39.594846 | orchestrator |
2026-01-30 03:35:39.594859 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-30 03:35:39.594872 | orchestrator | Friday 30 January 2026 03:34:00 +0000 (0:00:00.439) 0:00:26.063 ********
2026-01-30 03:35:39.594887 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:35:39.594901 | orchestrator |
2026-01-30 03:35:39.594916 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-30 03:35:39.594932 | orchestrator | Friday 30 January 2026 03:34:00 +0000 (0:00:00.607) 0:00:26.670 ********
2026-01-30 03:35:39.594948 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:35:39.594962 | orchestrator |
2026-01-30 03:35:39.594973 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-30 03:35:39.594981 | orchestrator | Friday 30 January 2026 03:34:01 +0000 (0:00:00.220) 0:00:26.891 ********
2026-01-30 03:35:39.594990 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:35:39.594998 | orchestrator |
2026-01-30 03:35:39.595006 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-30 03:35:39.595014 | orchestrator | Friday 30 January 2026 03:34:02 +0000 (0:00:01.693) 0:00:28.585 ********
2026-01-30 03:35:39.595022 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:35:39.595034 | orchestrator |
2026-01-30 03:35:39.595047 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-30 03:35:39.595066 | orchestrator |
2026-01-30 03:35:39.595079 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-30 03:35:39.595092 | orchestrator | Friday 30 January 2026 03:34:59 +0000 (0:00:56.457) 0:01:25.043 ********
2026-01-30 03:35:39.595104 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:35:39.595117 | orchestrator |
2026-01-30 03:35:39.595129 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-30 03:35:39.595142 | orchestrator | Friday 30 January 2026 03:34:59 +0000 (0:00:00.625) 0:01:25.668 ********
2026-01-30 03:35:39.595156 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:35:39.595170 | orchestrator |
2026-01-30 03:35:39.595183 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-30 03:35:39.595197 | orchestrator | Friday 30 January 2026 03:35:00 +0000 (0:00:00.212) 0:01:25.881 ********
2026-01-30 03:35:39.595211 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:35:39.595225 | orchestrator |
2026-01-30 03:35:39.595239 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-30 03:35:39.595271 | orchestrator | Friday 30 January 2026 03:35:01 +0000 (0:00:01.562) 0:01:27.444 ********
2026-01-30 03:35:39.595285 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:35:39.595300 | orchestrator |
2026-01-30 03:35:39.595314 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-01-30 03:35:39.595328 | orchestrator |
2026-01-30 03:35:39.595341 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-01-30 03:35:39.595355 | orchestrator | Friday 30 January 2026 03:35:18 +0000 (0:00:16.754) 0:01:44.198 ********
2026-01-30 03:35:39.595369 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:35:39.595383 | orchestrator |
2026-01-30 03:35:39.595422 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-01-30 03:35:39.595432 | orchestrator | Friday 30 January 2026 03:35:19 +0000 (0:00:00.695) 0:01:44.894 ********
2026-01-30 03:35:39.595442 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:35:39.595451 | orchestrator |
2026-01-30 03:35:39.595460 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-01-30 03:35:39.595469 | orchestrator | Friday 30 January 2026 03:35:19 +0000 (0:00:00.202) 0:01:45.096 ********
2026-01-30 03:35:39.595478 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:35:39.595488 | orchestrator |
2026-01-30 03:35:39.595497 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-01-30 03:35:39.595506 | orchestrator | Friday 30 January 2026 03:35:25 +0000 (0:00:06.496) 0:01:51.593 ********
2026-01-30 03:35:39.595515 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:35:39.595524 | orchestrator |
2026-01-30 03:35:39.595533 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-01-30 03:35:39.595542 | orchestrator |
2026-01-30 03:35:39.595550 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-01-30 03:35:39.595558 | orchestrator | Friday 30 January 2026 03:35:36 +0000 (0:00:10.560) 0:02:02.153 ********
2026-01-30 03:35:39.595566 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:35:39.595574 | orchestrator |
2026-01-30 03:35:39.595582 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-01-30 03:35:39.595590 | orchestrator | Friday 30 January 2026 03:35:36 +0000 (0:00:00.474) 0:02:02.628 ********
2026-01-30 03:35:39.595598 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-30 03:35:39.595606 | orchestrator | enable_outward_rabbitmq_True
2026-01-30 03:35:39.595614 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-01-30 03:35:39.595622 | orchestrator | outward_rabbitmq_restart
2026-01-30 03:35:39.595655 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:35:39.595664 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:35:39.595672 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:35:39.595680 | orchestrator |
2026-01-30 03:35:39.595688 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-01-30 03:35:39.595696 | orchestrator | skipping: no hosts matched
2026-01-30 03:35:39.595704 | orchestrator |
2026-01-30 03:35:39.595712 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-01-30 03:35:39.595720 | orchestrator | skipping: no hosts matched
2026-01-30 03:35:39.595728 | orchestrator |
2026-01-30 03:35:39.595736 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-01-30 03:35:39.595744 | orchestrator | skipping: no hosts matched
2026-01-30 03:35:39.595752 | orchestrator |
2026-01-30 03:35:39.595760 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:35:39.595785 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-30 03:35:39.595796 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:35:39.595804 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:35:39.595812 | orchestrator |
2026-01-30 03:35:39.595820 | orchestrator |
2026-01-30 03:35:39.595828 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:35:39.595836 | orchestrator | Friday 30 January 2026 03:35:39 +0000 (0:00:02.422) 0:02:05.050 ********
2026-01-30 03:35:39.595844 | orchestrator | ===============================================================================
2026-01-30 03:35:39.595852 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.77s
2026-01-30 03:35:39.595860 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.75s
2026-01-30 03:35:39.595875 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.96s
2026-01-30 03:35:39.595883 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.42s
2026-01-30 03:35:39.595891 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s
2026-01-30 03:35:39.595899 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.56s
2026-01-30 03:35:39.595907 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.56s
2026-01-30 03:35:39.595915 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.54s
2026-01-30 03:35:39.595923 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.39s
2026-01-30 03:35:39.595930 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.37s
2026-01-30 03:35:39.595938 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.30s
2026-01-30 03:35:39.595946 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.26s
2026-01-30 03:35:39.595954 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.02s
2026-01-30 03:35:39.595962 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.86s
2026-01-30 03:35:39.595976 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.81s
2026-01-30 03:35:39.595984 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.76s
2026-01-30 03:35:39.595992 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.76s
2026-01-30 03:35:39.596000 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.69s
2026-01-30 03:35:39.596008 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.64s
2026-01-30 03:35:39.596016 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.47s
2026-01-30 03:35:42.068842 | orchestrator | 2026-01-30 03:35:42 | INFO  | Task 36f5999e-6396-4fe3-a852-0468bc7bf004 (openvswitch) was prepared for execution.
2026-01-30 03:35:42.068931 | orchestrator | 2026-01-30 03:35:42 | INFO  | It takes a moment until task 36f5999e-6396-4fe3-a852-0468bc7bf004 (openvswitch) has been started and output is visible here.
2026-01-30 03:35:53.611749 | orchestrator |
2026-01-30 03:35:53.611872 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 03:35:53.611893 | orchestrator |
2026-01-30 03:35:53.611902 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 03:35:53.611911 | orchestrator | Friday 30 January 2026 03:35:46 +0000 (0:00:00.226) 0:00:00.226 ********
2026-01-30 03:35:53.611919 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:35:53.611929 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:35:53.611937 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:35:53.611945 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:35:53.611953 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:35:53.611960 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:35:53.611968 | orchestrator |
2026-01-30 03:35:53.611976 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 03:35:53.611984 | orchestrator | Friday 30 January 2026 03:35:46 +0000 (0:00:00.472) 0:00:00.698 ********
2026-01-30 03:35:53.611992 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-30 03:35:53.612001 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-30 03:35:53.612009 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-30 03:35:53.612017 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-30 03:35:53.612025 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-30 03:35:53.612033 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-01-30 03:35:53.612041 | orchestrator |
2026-01-30 03:35:53.612069 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-01-30 03:35:53.612077 | orchestrator |
2026-01-30 03:35:53.612086 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-01-30 03:35:53.612094 | orchestrator | Friday 30 January 2026 03:35:46 +0000 (0:00:00.448) 0:00:01.146 ********
2026-01-30 03:35:53.612103 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:35:53.612111 | orchestrator |
2026-01-30 03:35:53.612119 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-30 03:35:53.612127 | orchestrator | Friday 30 January 2026 03:35:47 +0000 (0:00:00.897) 0:00:02.044 ********
2026-01-30 03:35:53.612135 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-30 03:35:53.612144 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-30 03:35:53.612152 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-30 03:35:53.612159 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-30 03:35:53.612167 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-30 03:35:53.612175 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-30 03:35:53.612183 | orchestrator |
2026-01-30 03:35:53.612191 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-30 03:35:53.612199 | orchestrator | Friday 30 January 2026 03:35:49 +0000 (0:00:01.296) 0:00:03.340 ********
2026-01-30 03:35:53.612207 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-01-30 03:35:53.612215 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-01-30 03:35:53.612222 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-01-30 03:35:53.612230 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-01-30 03:35:53.612238 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-01-30 03:35:53.612246 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-01-30 03:35:53.612253 | orchestrator |
2026-01-30 03:35:53.612261 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-30 03:35:53.612269 | orchestrator | Friday 30 January 2026 03:35:50 +0000 (0:00:01.412) 0:00:04.753 ********
2026-01-30 03:35:53.612277 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-01-30 03:35:53.612285 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:35:53.612294 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-01-30 03:35:53.612303 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:35:53.612312 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-01-30 03:35:53.612322 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:35:53.612330 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-01-30 03:35:53.612339 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:35:53.612348 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-01-30 03:35:53.612357 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:35:53.612365 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-01-30 03:35:53.612374 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:35:53.612383 | orchestrator |
2026-01-30 03:35:53.612392 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-01-30 03:35:53.612401 | orchestrator | Friday 30 January 2026 03:35:51 +0000 (0:00:01.135) 0:00:05.888 ********
2026-01-30 03:35:53.612410 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:35:53.612419 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:35:53.612428 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:35:53.612441 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:35:53.612455 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:35:53.612468 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:35:53.612481 | orchestrator |
2026-01-30 03:35:53.612494 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-01-30 03:35:53.612516 | orchestrator | Friday 30 January 2026 03:35:52 +0000 (0:00:00.685) 0:00:06.573 ********
2026-01-30 03:35:53.612553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:53.612576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:53.612591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:53.612738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:53.612765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:53.612784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:56.129345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:56.129483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:56.129510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:56.129531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:56.129573 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:56.129753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:56.129780 | orchestrator |
2026-01-30 03:35:56.129801 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-30 03:35:56.129823 | orchestrator | Friday 30 January 2026 03:35:53 +0000 (0:00:01.308) 0:00:07.882 ********
2026-01-30 03:35:56.129845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:56.129868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:56.129889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:56.129913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:56.129960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:56.129997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 03:35:58.692580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:58.692795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 03:35:58.692807 | orchestrator | changed: [testbed-node-3] =>
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692889 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692894 | orchestrator | 2026-01-30 03:35:58.692900 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-30 03:35:58.692905 | orchestrator | Friday 30 January 2026 03:35:56 +0000 (0:00:02.521) 0:00:10.403 ******** 2026-01-30 03:35:58.692909 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:35:58.692914 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:35:58.692918 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:35:58.692922 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:35:58.692926 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:35:58.692930 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:35:58.692934 | orchestrator | 2026-01-30 03:35:58.692938 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-30 03:35:58.692942 | orchestrator | Friday 30 January 2026 03:35:57 +0000 (0:00:00.872) 0:00:11.276 ******** 2026-01-30 03:35:58.692946 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-30 03:35:58.692978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-30 03:36:23.714241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-30 03:36:23.714333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:36:23.714354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 
03:36:23.714409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:36:23.714421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:36:23.714450 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:36:23.714462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-30 03:36:23.714473 | orchestrator | 2026-01-30 03:36:23.714486 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 03:36:23.714497 | orchestrator | Friday 30 January 2026 03:35:58 +0000 (0:00:01.691) 0:00:12.967 ******** 2026-01-30 03:36:23.714506 | orchestrator | 2026-01-30 03:36:23.714516 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 03:36:23.714525 | orchestrator | Friday 30 January 2026 03:35:59 +0000 (0:00:00.295) 0:00:13.263 ******** 2026-01-30 03:36:23.714547 | orchestrator | 2026-01-30 03:36:23.714556 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 03:36:23.714565 | orchestrator | Friday 30 January 2026 03:35:59 +0000 (0:00:00.128) 0:00:13.391 ******** 2026-01-30 03:36:23.714574 | orchestrator | 2026-01-30 03:36:23.714583 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-01-30 03:36:23.714592 | orchestrator | Friday 30 January 2026 03:35:59 +0000 (0:00:00.125) 0:00:13.516 ******** 2026-01-30 03:36:23.714654 | orchestrator | 2026-01-30 03:36:23.714664 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 03:36:23.714673 | orchestrator | Friday 30 January 2026 03:35:59 +0000 (0:00:00.123) 0:00:13.639 ******** 2026-01-30 03:36:23.714682 | orchestrator | 2026-01-30 03:36:23.714691 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 03:36:23.714701 | orchestrator | Friday 30 January 2026 03:35:59 +0000 (0:00:00.125) 0:00:13.764 ******** 2026-01-30 03:36:23.714711 | orchestrator | 2026-01-30 03:36:23.714720 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-30 03:36:23.714730 | orchestrator | Friday 30 January 2026 03:35:59 +0000 (0:00:00.124) 0:00:13.889 ******** 2026-01-30 03:36:23.714741 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:36:23.714752 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:36:23.714761 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:36:23.714771 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:36:23.714781 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:36:23.714790 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:36:23.714800 | orchestrator | 2026-01-30 03:36:23.714811 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-30 03:36:23.714822 | orchestrator | Friday 30 January 2026 03:36:08 +0000 (0:00:08.586) 0:00:22.475 ******** 2026-01-30 03:36:23.714832 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:36:23.714852 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:36:23.714862 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:36:23.714871 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:36:23.714882 | orchestrator | ok: 
[testbed-node-4] 2026-01-30 03:36:23.714891 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:36:23.714901 | orchestrator | 2026-01-30 03:36:23.714912 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-30 03:36:23.714922 | orchestrator | Friday 30 January 2026 03:36:09 +0000 (0:00:01.037) 0:00:23.513 ******** 2026-01-30 03:36:23.714933 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:36:23.714942 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:36:23.714954 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:36:23.714963 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:36:23.714972 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:36:23.714982 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:36:23.714993 | orchestrator | 2026-01-30 03:36:23.715003 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-30 03:36:23.715013 | orchestrator | Friday 30 January 2026 03:36:17 +0000 (0:00:07.951) 0:00:31.464 ******** 2026-01-30 03:36:23.715023 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-01-30 03:36:23.715035 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-30 03:36:23.715046 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-30 03:36:23.715055 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-30 03:36:23.715066 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-30 03:36:23.715076 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-30 
03:36:23.715086 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-30 03:36:23.715121 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-30 03:36:36.607394 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-30 03:36:36.607516 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-30 03:36:36.607536 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-30 03:36:36.607551 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-30 03:36:36.607568 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 03:36:36.607583 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 03:36:36.607663 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 03:36:36.607673 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 03:36:36.607682 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 03:36:36.607691 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 03:36:36.607701 | orchestrator | 2026-01-30 03:36:36.607711 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-01-30 03:36:36.607722 | orchestrator | Friday 30 January 2026 03:36:23 +0000 (0:00:06.437) 0:00:37.901 ******** 2026-01-30 03:36:36.607732 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-30 03:36:36.607742 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:36:36.607752 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-30 03:36:36.607761 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:36:36.607770 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-30 03:36:36.607779 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:36:36.607788 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-01-30 03:36:36.607797 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-01-30 03:36:36.607806 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-01-30 03:36:36.607815 | orchestrator | 2026-01-30 03:36:36.607824 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-30 03:36:36.607833 | orchestrator | Friday 30 January 2026 03:36:26 +0000 (0:00:02.370) 0:00:40.271 ******** 2026-01-30 03:36:36.607842 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-30 03:36:36.607850 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:36:36.607859 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-30 03:36:36.607868 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:36:36.607877 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-30 03:36:36.607886 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:36:36.607895 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-30 03:36:36.607905 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-30 03:36:36.607933 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-30 03:36:36.607945 | orchestrator 
| 2026-01-30 03:36:36.607959 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-30 03:36:36.607972 | orchestrator | Friday 30 January 2026 03:36:29 +0000 (0:00:03.011) 0:00:43.283 ******** 2026-01-30 03:36:36.608056 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:36:36.608071 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:36:36.608107 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:36:36.608120 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:36:36.608132 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:36:36.608145 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:36:36.608157 | orchestrator | 2026-01-30 03:36:36.608170 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:36:36.608183 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 03:36:36.608198 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 03:36:36.608211 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 03:36:36.608223 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-30 03:36:36.608236 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-30 03:36:36.608248 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-30 03:36:36.608260 | orchestrator | 2026-01-30 03:36:36.608273 | orchestrator | 2026-01-30 03:36:36.608285 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:36:36.608298 | orchestrator | Friday 30 January 2026 03:36:36 +0000 (0:00:07.053) 0:00:50.336 ******** 2026-01-30 03:36:36.608331 | 
orchestrator | =============================================================================== 2026-01-30 03:36:36.608342 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.00s 2026-01-30 03:36:36.608354 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.59s 2026-01-30 03:36:36.608364 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.44s 2026-01-30 03:36:36.608375 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.01s 2026-01-30 03:36:36.608386 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.52s 2026-01-30 03:36:36.608397 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.37s 2026-01-30 03:36:36.608408 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.69s 2026-01-30 03:36:36.608419 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.41s 2026-01-30 03:36:36.608430 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.31s 2026-01-30 03:36:36.608442 | orchestrator | module-load : Load modules ---------------------------------------------- 1.30s 2026-01-30 03:36:36.608453 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.14s 2026-01-30 03:36:36.608464 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.04s 2026-01-30 03:36:36.608474 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.92s 2026-01-30 03:36:36.608485 | orchestrator | openvswitch : include_tasks --------------------------------------------- 0.90s 2026-01-30 03:36:36.608496 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.87s 2026-01-30 03:36:36.608507 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.69s 2026-01-30 03:36:36.608518 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s 2026-01-30 03:36:36.608529 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-01-30 03:36:38.797170 | orchestrator | 2026-01-30 03:36:38 | INFO  | Task 43333dfe-261a-4a68-8401-cf1ce62c719c (ovn) was prepared for execution. 2026-01-30 03:36:38.797269 | orchestrator | 2026-01-30 03:36:38 | INFO  | It takes a moment until task 43333dfe-261a-4a68-8401-cf1ce62c719c (ovn) has been started and output is visible here. 2026-01-30 03:36:47.398850 | orchestrator | 2026-01-30 03:36:47.398984 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 03:36:47.399003 | orchestrator | 2026-01-30 03:36:47.399933 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 03:36:47.399964 | orchestrator | Friday 30 January 2026 03:36:42 +0000 (0:00:00.117) 0:00:00.117 ******** 2026-01-30 03:36:47.399983 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:36:47.400003 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:36:47.400022 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:36:47.400042 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:36:47.400063 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:36:47.400082 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:36:47.400103 | orchestrator | 2026-01-30 03:36:47.400125 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 03:36:47.400145 | orchestrator | Friday 30 January 2026 03:36:42 +0000 (0:00:00.481) 0:00:00.599 ******** 2026-01-30 03:36:47.400176 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-30 03:36:47.400193 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-30 
03:36:47.400211 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-30 03:36:47.400228 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-30 03:36:47.400247 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-30 03:36:47.400266 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-30 03:36:47.400285 | orchestrator | 2026-01-30 03:36:47.400303 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-30 03:36:47.400323 | orchestrator | 2026-01-30 03:36:47.400341 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-30 03:36:47.400360 | orchestrator | Friday 30 January 2026 03:36:43 +0000 (0:00:00.633) 0:00:01.232 ******** 2026-01-30 03:36:47.400378 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:36:47.400399 | orchestrator | 2026-01-30 03:36:47.400418 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-30 03:36:47.400437 | orchestrator | Friday 30 January 2026 03:36:44 +0000 (0:00:00.755) 0:00:01.987 ******** 2026-01-30 03:36:47.400459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400657 | orchestrator | 2026-01-30 03:36:47.400668 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-30 03:36:47.400680 | orchestrator | Friday 30 January 2026 03:36:45 +0000 (0:00:00.913) 0:00:02.901 ******** 2026-01-30 03:36:47.400700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400720 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400755 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400820 | orchestrator | 2026-01-30 03:36:47.400836 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-30 03:36:47.400853 | orchestrator | Friday 30 January 2026 03:36:46 +0000 (0:00:01.360) 0:00:04.261 ******** 2026-01-30 03:36:47.400872 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:36:47.400924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711493 | orchestrator | 2026-01-30 03:37:11.711504 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-30 03:37:11.711514 | orchestrator | Friday 30 January 2026 03:36:47 +0000 (0:00:00.944) 0:00:05.205 ******** 2026-01-30 03:37:11.711524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711693 | orchestrator | 2026-01-30 03:37:11.711702 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-30 03:37:11.711712 | orchestrator | Friday 30 January 2026 03:36:48 +0000 (0:00:01.358) 0:00:06.564 ******** 
2026-01-30 03:37:11.711727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711736 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711745 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711771 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:37:11.711789 | orchestrator | 2026-01-30 03:37:11.711798 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-30 03:37:11.711807 | orchestrator | Friday 30 January 2026 03:36:49 +0000 (0:00:01.158) 0:00:07.723 ******** 2026-01-30 03:37:11.711817 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:37:11.711828 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:37:11.711837 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:37:11.711845 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:37:11.711854 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:37:11.711863 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:37:11.711871 | orchestrator | 2026-01-30 03:37:11.711880 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-30 03:37:11.711889 | orchestrator | Friday 30 January 2026 03:36:52 +0000 (0:00:02.434) 0:00:10.157 ******** 2026-01-30 03:37:11.711898 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
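The "Configure OVN in OVSDB" task above writes per-chassis settings into the local Open vSwitch database as `external_ids` (via `ovs-vsctl` under the hood). A minimal sketch of how the values seen in this log fit together; the helper name `build_external_ids` is an assumption for illustration, not part of kolla-ansible, but the keys and values mirror the task output (note how `ovn-remote` is assembled from the three control-node southbound DB endpoints):

```python
def build_external_ids(node_ip, controller_ips, sb_port=6642):
    """Assemble the Open vSwitch external_ids one chassis receives.

    Hypothetical helper: mirrors the key/value pairs the
    'Configure OVN in OVSDB' task sets on each testbed node.
    """
    return {
        "ovn-encap-ip": node_ip,              # tunnel endpoint of this node
        "ovn-encap-type": "geneve",           # overlay encapsulation used here
        "ovn-remote": ",".join(               # all OVN SB DB endpoints, comma-joined
            f"tcp:{ip}:{sb_port}" for ip in controller_ips
        ),
        "ovn-remote-probe-interval": "60000",   # milliseconds, as in the log
        "ovn-openflow-probe-interval": "60",    # seconds, as in the log
        "ovn-monitor-all": "false",
    }

controllers = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
ids = build_external_ids("192.168.16.13", controllers)  # testbed-node-3's encap IP
print(ids["ovn-remote"])
```

This also explains why the `ovn-remote` lines are identical on every node while `ovn-encap-ip` differs: the remote list names the whole SB cluster, the encap IP names the local tunnel endpoint.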
2026-01-30 03:37:11.711907 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-30 03:37:11.711916 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-30 03:37:11.711925 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-30 03:37:11.711933 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-30 03:37:11.711942 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-30 03:37:11.711957 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-30 03:37:48.484425 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-30 03:37:48.484648 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-30 03:37:48.484691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-30 03:37:48.484704 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-30 03:37:48.484715 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-30 03:37:48.484727 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-30 03:37:48.484741 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-30 03:37:48.484773 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-30 03:37:48.484784 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-30 03:37:48.484795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-30 03:37:48.484806 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-30 03:37:48.484818 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-30 03:37:48.484830 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-30 03:37:48.484841 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-30 03:37:48.484852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-30 03:37:48.484864 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-30 03:37:48.484875 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-30 03:37:48.484886 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-30 03:37:48.484897 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-30 03:37:48.484908 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-30 03:37:48.484919 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-30 03:37:48.484929 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-01-30 03:37:48.484942 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-30 03:37:48.484955 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-30 03:37:48.484968 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-30 03:37:48.484981 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-30 03:37:48.484993 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-30 03:37:48.485005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-30 03:37:48.485016 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-30 03:37:48.485027 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-30 03:37:48.485038 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-30 03:37:48.485049 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-30 03:37:48.485060 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-30 03:37:48.485071 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-30 03:37:48.485082 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-30 03:37:48.485094 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 
'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-30 03:37:48.485133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-30 03:37:48.485145 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-30 03:37:48.485162 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-30 03:37:48.485173 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-30 03:37:48.485184 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-30 03:37:48.485195 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-30 03:37:48.485206 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-30 03:37:48.485216 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-30 03:37:48.485227 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-30 03:37:48.485238 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-30 03:37:48.485249 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-30 03:37:48.485260 | orchestrator | 2026-01-30 03:37:48.485272 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-01-30 03:37:48.485283 | orchestrator | Friday 30 January 2026 03:37:11 +0000 (0:00:18.907) 0:00:29.064 ******** 2026-01-30 03:37:48.485294 | orchestrator | 2026-01-30 03:37:48.485305 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-30 03:37:48.485316 | orchestrator | Friday 30 January 2026 03:37:11 +0000 (0:00:00.156) 0:00:29.221 ******** 2026-01-30 03:37:48.485327 | orchestrator | 2026-01-30 03:37:48.485338 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-30 03:37:48.485349 | orchestrator | Friday 30 January 2026 03:37:11 +0000 (0:00:00.056) 0:00:29.277 ******** 2026-01-30 03:37:48.485359 | orchestrator | 2026-01-30 03:37:48.485370 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-30 03:37:48.485381 | orchestrator | Friday 30 January 2026 03:37:11 +0000 (0:00:00.056) 0:00:29.333 ******** 2026-01-30 03:37:48.485392 | orchestrator | 2026-01-30 03:37:48.485403 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-30 03:37:48.485414 | orchestrator | Friday 30 January 2026 03:37:11 +0000 (0:00:00.055) 0:00:29.389 ******** 2026-01-30 03:37:48.485425 | orchestrator | 2026-01-30 03:37:48.485435 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-30 03:37:48.485446 | orchestrator | Friday 30 January 2026 03:37:11 +0000 (0:00:00.057) 0:00:29.446 ******** 2026-01-30 03:37:48.485457 | orchestrator | 2026-01-30 03:37:48.485468 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-30 03:37:48.485479 | orchestrator | Friday 30 January 2026 03:37:11 +0000 (0:00:00.054) 0:00:29.501 ******** 2026-01-30 03:37:48.485490 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:37:48.485502 | orchestrator | ok: 
[testbed-node-5] 2026-01-30 03:37:48.485513 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:37:48.485523 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:37:48.485534 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:37:48.485545 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:37:48.485583 | orchestrator | 2026-01-30 03:37:48.485603 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-30 03:37:48.485621 | orchestrator | Friday 30 January 2026 03:37:13 +0000 (0:00:01.485) 0:00:30.986 ******** 2026-01-30 03:37:48.485649 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:37:48.485668 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:37:48.485683 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:37:48.485700 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:37:48.485718 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:37:48.485736 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:37:48.485753 | orchestrator | 2026-01-30 03:37:48.485770 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-30 03:37:48.485786 | orchestrator | 2026-01-30 03:37:48.485806 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-30 03:37:48.485824 | orchestrator | Friday 30 January 2026 03:37:46 +0000 (0:00:33.505) 0:01:04.492 ******** 2026-01-30 03:37:48.485843 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:37:48.485862 | orchestrator | 2026-01-30 03:37:48.485881 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-30 03:37:48.485898 | orchestrator | Friday 30 January 2026 03:37:47 +0000 (0:00:00.533) 0:01:05.026 ******** 2026-01-30 03:37:48.485915 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-30 03:37:48.485927 | orchestrator | 2026-01-30 03:37:48.485938 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-30 03:37:48.485948 | orchestrator | Friday 30 January 2026 03:37:47 +0000 (0:00:00.440) 0:01:05.466 ******** 2026-01-30 03:37:48.485959 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:37:48.485970 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:37:48.485981 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:37:48.485992 | orchestrator | 2026-01-30 03:37:48.486003 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-01-30 03:37:48.486177 | orchestrator | Friday 30 January 2026 03:37:48 +0000 (0:00:00.816) 0:01:06.282 ******** 2026-01-30 03:37:57.468489 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:37:57.468646 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:37:57.468670 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:37:57.468688 | orchestrator | 2026-01-30 03:37:57.468707 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-30 03:37:57.468746 | orchestrator | Friday 30 January 2026 03:37:48 +0000 (0:00:00.281) 0:01:06.564 ******** 2026-01-30 03:37:57.468765 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:37:57.468782 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:37:57.468799 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:37:57.468815 | orchestrator | 2026-01-30 03:37:57.468831 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-30 03:37:57.468848 | orchestrator | Friday 30 January 2026 03:37:49 +0000 (0:00:00.255) 0:01:06.819 ******** 2026-01-30 03:37:57.468865 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:37:57.468882 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:37:57.468899 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:37:57.468916 | orchestrator | 
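The `lookup_cluster.yml` tasks above ("Checking for any existing OVN DB container volumes" and the two "Divide hosts by their OVN ... volume availability" steps) split the three DB hosts into those with pre-existing NB/SB data and those without, which determines whether the role bootstraps a fresh Raft cluster or joins an existing one. A rough sketch of that grouping logic; `divide_hosts` and the group names are illustrative assumptions, not the role's actual variable names:

```python
def divide_hosts(volume_present):
    """Partition hosts by whether an OVN DB volume already exists.

    Hypothetical helper: volume_present maps hostname -> bool. Hosts with
    a volume would rejoin the existing cluster; hosts without one are
    candidates for an initial bootstrap (bootstrap-initial.yml in this run).
    """
    have = sorted(h for h, present in volume_present.items() if present)
    need = sorted(h for h, present in volume_present.items() if not present)
    return {"existing_cluster": have, "bootstrap_new": need}

# In this upgrade run all three control nodes proceed through
# bootstrap-initial.yml, i.e. no prior volumes were found:
groups = divide_hosts({
    "testbed-node-0": False,
    "testbed-node-1": False,
    "testbed-node-2": False,
})
```

The subsequent skipped tasks ("Check if running on all OVN NB DB hosts", port-liveness and leader/follower checks) are consistent with this: they only apply when an existing cluster is detected.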
2026-01-30 03:37:57.468933 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-01-30 03:37:57.468950 | orchestrator | Friday 30 January 2026  03:37:49 +0000 (0:00:00.278)       0:01:07.098 ********
2026-01-30 03:37:57.468967 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:37:57.468984 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:37:57.469001 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:37:57.469017 | orchestrator |
2026-01-30 03:37:57.469034 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-01-30 03:37:57.469052 | orchestrator | Friday 30 January 2026  03:37:49 +0000 (0:00:00.270)       0:01:07.369 ********
2026-01-30 03:37:57.469071 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469089 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469107 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469124 | orchestrator |
2026-01-30 03:37:57.469142 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-01-30 03:37:57.469188 | orchestrator | Friday 30 January 2026  03:37:49 +0000 (0:00:00.357)       0:01:07.726 ********
2026-01-30 03:37:57.469207 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469224 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469240 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469255 | orchestrator |
2026-01-30 03:37:57.469271 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-01-30 03:37:57.469287 | orchestrator | Friday 30 January 2026  03:37:50 +0000 (0:00:00.243)       0:01:07.969 ********
2026-01-30 03:37:57.469303 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469320 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469338 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469355 | orchestrator |
2026-01-30 03:37:57.469372 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-01-30 03:37:57.469389 | orchestrator | Friday 30 January 2026  03:37:50 +0000 (0:00:00.241)       0:01:08.211 ********
2026-01-30 03:37:57.469405 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469422 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469439 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469457 | orchestrator |
2026-01-30 03:37:57.469476 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-01-30 03:37:57.469494 | orchestrator | Friday 30 January 2026  03:37:50 +0000 (0:00:00.239)       0:01:08.450 ********
2026-01-30 03:37:57.469511 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469530 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469570 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469586 | orchestrator |
2026-01-30 03:37:57.469601 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-01-30 03:37:57.469617 | orchestrator | Friday 30 January 2026  03:37:50 +0000 (0:00:00.354)       0:01:08.805 ********
2026-01-30 03:37:57.469633 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469649 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469665 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469680 | orchestrator |
2026-01-30 03:37:57.469696 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-01-30 03:37:57.469713 | orchestrator | Friday 30 January 2026  03:37:51 +0000 (0:00:00.240)       0:01:09.045 ********
2026-01-30 03:37:57.469729 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469746 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469762 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469777 | orchestrator |
2026-01-30 03:37:57.469794 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-01-30 03:37:57.469811 | orchestrator | Friday 30 January 2026  03:37:51 +0000 (0:00:00.230)       0:01:09.275 ********
2026-01-30 03:37:57.469827 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469843 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469861 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469877 | orchestrator |
2026-01-30 03:37:57.469894 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-01-30 03:37:57.469911 | orchestrator | Friday 30 January 2026  03:37:51 +0000 (0:00:00.253)       0:01:09.529 ********
2026-01-30 03:37:57.469928 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.469944 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.469961 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.469978 | orchestrator |
2026-01-30 03:37:57.469995 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-01-30 03:37:57.470012 | orchestrator | Friday 30 January 2026  03:37:52 +0000 (0:00:00.365)       0:01:09.895 ********
2026-01-30 03:37:57.470105 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.470122 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.470139 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.470156 | orchestrator |
2026-01-30 03:37:57.470171 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-01-30 03:37:57.470199 | orchestrator | Friday 30 January 2026  03:37:52 +0000 (0:00:00.251)       0:01:10.146 ********
2026-01-30 03:37:57.470213 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.470226 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.470239 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.470251 | orchestrator |
2026-01-30 03:37:57.470265 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-01-30 03:37:57.470278 | orchestrator | Friday 30 January 2026  03:37:52 +0000 (0:00:00.267)       0:01:10.414 ********
2026-01-30 03:37:57.470316 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.470330 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.470344 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.470357 | orchestrator |
2026-01-30 03:37:57.470371 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-30 03:37:57.470393 | orchestrator | Friday 30 January 2026  03:37:52 +0000 (0:00:00.254)       0:01:10.668 ********
2026-01-30 03:37:57.470408 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:37:57.470422 | orchestrator |
2026-01-30 03:37:57.470436 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-01-30 03:37:57.470450 | orchestrator | Friday 30 January 2026  03:37:53 +0000 (0:00:00.576)       0:01:11.245 ********
2026-01-30 03:37:57.470462 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:37:57.470476 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:37:57.470490 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:37:57.470503 | orchestrator |
2026-01-30 03:37:57.470515 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-01-30 03:37:57.470528 | orchestrator | Friday 30 January 2026  03:37:53 +0000 (0:00:00.370)       0:01:11.615 ********
2026-01-30 03:37:57.470541 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:37:57.470583 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:37:57.470596 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:37:57.470609 | orchestrator |
2026-01-30 03:37:57.470622 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-01-30 03:37:57.470635 | orchestrator | Friday 30 January 2026  03:37:54 +0000 (0:00:00.356)       0:01:11.972 ********
2026-01-30 03:37:57.470648 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.470662 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.470675 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.470688 | orchestrator |
2026-01-30 03:37:57.470700 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-01-30 03:37:57.470714 | orchestrator | Friday 30 January 2026  03:37:54 +0000 (0:00:00.279)       0:01:12.252 ********
2026-01-30 03:37:57.470726 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.470739 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.470753 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.470791 | orchestrator |
2026-01-30 03:37:57.470804 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-01-30 03:37:57.470818 | orchestrator | Friday 30 January 2026  03:37:54 +0000 (0:00:00.392)       0:01:12.645 ********
2026-01-30 03:37:57.470831 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.470845 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.470858 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.470871 | orchestrator |
2026-01-30 03:37:57.470884 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-01-30 03:37:57.470897 | orchestrator | Friday 30 January 2026  03:37:55 +0000 (0:00:00.288)       0:01:12.933 ********
2026-01-30 03:37:57.470911 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.470924 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.470938 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.470951 | orchestrator |
2026-01-30 03:37:57.470965 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-01-30 03:37:57.470978 | orchestrator | Friday 30 January 2026  03:37:55 +0000 (0:00:00.293)       0:01:13.227 ********
2026-01-30 03:37:57.471006 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.471019 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.471033 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.471045 | orchestrator |
2026-01-30 03:37:57.471059 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-01-30 03:37:57.471072 | orchestrator | Friday 30 January 2026  03:37:55 +0000 (0:00:00.286)       0:01:13.514 ********
2026-01-30 03:37:57.471085 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:37:57.471098 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:37:57.471111 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:37:57.471124 | orchestrator |
2026-01-30 03:37:57.471137 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-30 03:37:57.471150 | orchestrator | Friday 30 January 2026  03:37:56 +0000 (0:00:00.408)       0:01:13.922 ********
2026-01-30 03:37:57.471167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:37:57.471183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:37:57.471197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:37:57.471232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471702 | orchestrator |
2026-01-30 03:38:03.471716 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-30 03:38:03.471729 | orchestrator | Friday 30 January 2026  03:37:57 +0000 (0:00:01.349)       0:01:15.271 ********
2026-01-30 03:38:03.471742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471937 | orchestrator |
2026-01-30 03:38:03.471949 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-30 03:38:03.471963 | orchestrator | Friday 30 January 2026  03:38:01 +0000 (0:00:03.651)       0:01:18.923 ********
2026-01-30 03:38:03.471977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.471991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.472005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.472019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.472033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:03.472061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.214738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.214873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.214889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.214902 | orchestrator |
2026-01-30 03:38:17.214915 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-30 03:38:17.214927 | orchestrator | Friday 30 January 2026  03:38:03 +0000 (0:00:01.999)       0:01:20.923 ********
2026-01-30 03:38:17.214938 | orchestrator |
2026-01-30 03:38:17.214949 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-30 03:38:17.214960 | orchestrator | Friday 30 January 2026  03:38:03 +0000 (0:00:00.061)       0:01:20.984 ********
2026-01-30 03:38:17.214971 | orchestrator |
2026-01-30 03:38:17.214982 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-30 03:38:17.214992 | orchestrator | Friday 30 January 2026  03:38:03 +0000 (0:00:00.064)       0:01:21.049 ********
2026-01-30 03:38:17.215003 | orchestrator |
2026-01-30 03:38:17.215014 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-30 03:38:17.215025 | orchestrator | Friday 30 January 2026  03:38:03 +0000 (0:00:00.219)       0:01:21.269 ********
2026-01-30 03:38:17.215036 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:38:17.215049 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:38:17.215060 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:38:17.215071 | orchestrator |
2026-01-30 03:38:17.215082 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-30 03:38:17.215093 | orchestrator | Friday 30 January 2026  03:38:05 +0000 (0:00:02.455)       0:01:23.724 ********
2026-01-30 03:38:17.215104 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:38:17.215114 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:38:17.215125 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:38:17.215136 | orchestrator |
2026-01-30 03:38:17.215147 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-30 03:38:17.215158 | orchestrator | Friday 30 January 2026  03:38:08 +0000 (0:00:02.427)       0:01:26.152 ********
2026-01-30 03:38:17.215169 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:38:17.215180 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:38:17.215191 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:38:17.215201 | orchestrator |
2026-01-30 03:38:17.215212 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-30 03:38:17.215223 | orchestrator | Friday 30 January 2026  03:38:10 +0000 (0:00:02.437)       0:01:28.590 ********
2026-01-30 03:38:17.215234 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:38:17.215245 | orchestrator |
2026-01-30 03:38:17.215256 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-30 03:38:17.215267 | orchestrator | Friday 30 January 2026  03:38:10 +0000 (0:00:00.121)       0:01:28.711 ********
2026-01-30 03:38:17.215278 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:17.215289 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:17.215300 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:17.215311 | orchestrator |
2026-01-30 03:38:17.215323 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-30 03:38:17.215333 | orchestrator | Friday 30 January 2026  03:38:11 +0000 (0:00:00.921)       0:01:29.632 ********
2026-01-30 03:38:17.215344 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:38:17.215362 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:38:17.215373 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:38:17.215384 | orchestrator |
2026-01-30 03:38:17.215394 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-30 03:38:17.215405 | orchestrator | Friday 30 January 2026  03:38:12 +0000 (0:00:00.685)       0:01:30.318 ********
2026-01-30 03:38:17.215416 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:17.215427 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:17.215438 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:17.215448 | orchestrator |
2026-01-30 03:38:17.215459 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-30 03:38:17.215484 | orchestrator | Friday 30 January 2026  03:38:13 +0000 (0:00:00.744)       0:01:31.062 ********
2026-01-30 03:38:17.215496 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:38:17.215506 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:38:17.215517 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:38:17.215528 | orchestrator |
2026-01-30 03:38:17.215602 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-30 03:38:17.215622 | orchestrator | Friday 30 January 2026  03:38:13 +0000 (0:00:00.663)       0:01:31.725 ********
2026-01-30 03:38:17.215636 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:17.215647 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:17.215677 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:17.215689 | orchestrator |
2026-01-30 03:38:17.215700 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-30 03:38:17.215711 | orchestrator | Friday 30 January 2026  03:38:14 +0000 (0:00:00.708)       0:01:32.434 ********
2026-01-30 03:38:17.215722 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:17.215733 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:17.215744 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:17.215754 | orchestrator |
2026-01-30 03:38:17.215766 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-01-30 03:38:17.215777 | orchestrator | Friday 30 January 2026  03:38:15 +0000 (0:00:00.944)       0:01:33.378 ********
2026-01-30 03:38:17.215788 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:17.215799 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:17.215810 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:17.215820 | orchestrator |
2026-01-30 03:38:17.215831 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-30 03:38:17.215842 | orchestrator | Friday 30 January 2026  03:38:15 +0000 (0:00:00.267)       0:01:33.646 ********
2026-01-30 03:38:17.215855 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215869 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215881 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215892 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215912 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215923 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215952 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:17.215972 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117463 | orchestrator |
2026-01-30 03:38:24.117610 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-30 03:38:24.117625 | orchestrator | Friday 30 January 2026  03:38:17 +0000 (0:00:01.364)       0:01:35.011 ********
2026-01-30 03:38:24.117637 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117649 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117667 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117718 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117759 | orchestrator |
2026-01-30 03:38:24.117768 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-30 03:38:24.117777 | orchestrator | Friday 30 January 2026  03:38:20 +0000 (0:00:03.719)       0:01:38.730 ********
2026-01-30 03:38:24.117801 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117811 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117829 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 03:38:24.117871 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:38:24.117884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 03:38:24.117893 | orchestrator | 2026-01-30 03:38:24.117902 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-30 03:38:24.117911 | orchestrator | Friday 30 January 2026 03:38:23 +0000 (0:00:02.993) 0:01:41.723 ******** 2026-01-30 03:38:24.117920 | orchestrator | 2026-01-30 03:38:24.117929 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-30 03:38:24.117938 | orchestrator | Friday 30 January 2026 03:38:23 +0000 (0:00:00.058) 0:01:41.782 ******** 2026-01-30 03:38:24.117946 | orchestrator | 2026-01-30 03:38:24.117955 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-30 03:38:24.117964 | orchestrator | Friday 30 January 2026 03:38:24 +0000 (0:00:00.061) 0:01:41.843 ******** 2026-01-30 03:38:24.117972 | orchestrator | 2026-01-30 03:38:24.117994 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-30 03:38:47.653797 | orchestrator | Friday 30 January 2026 03:38:24 +0000 (0:00:00.062) 0:01:41.905 ******** 2026-01-30 03:38:47.653886 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:38:47.653897 | orchestrator | changed: 
[testbed-node-1] 2026-01-30 03:38:47.653904 | orchestrator | 2026-01-30 03:38:47.653910 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-30 03:38:47.653917 | orchestrator | Friday 30 January 2026 03:38:30 +0000 (0:00:06.123) 0:01:48.029 ******** 2026-01-30 03:38:47.653923 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:38:47.653930 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:38:47.653936 | orchestrator | 2026-01-30 03:38:47.653942 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-30 03:38:47.653970 | orchestrator | Friday 30 January 2026 03:38:36 +0000 (0:00:06.149) 0:01:54.179 ******** 2026-01-30 03:38:47.653976 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:38:47.653982 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:38:47.653988 | orchestrator | 2026-01-30 03:38:47.653994 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-30 03:38:47.654000 | orchestrator | Friday 30 January 2026 03:38:42 +0000 (0:00:06.236) 0:02:00.415 ******** 2026-01-30 03:38:47.654006 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:38:47.654055 | orchestrator | 2026-01-30 03:38:47.654063 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-30 03:38:47.654069 | orchestrator | Friday 30 January 2026 03:38:42 +0000 (0:00:00.131) 0:02:00.547 ******** 2026-01-30 03:38:47.654075 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:38:47.654082 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:38:47.654087 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:38:47.654093 | orchestrator | 2026-01-30 03:38:47.654099 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-30 03:38:47.654106 | orchestrator | Friday 30 January 2026 03:38:43 +0000 (0:00:00.943) 0:02:01.491 ******** 
2026-01-30 03:38:47.654112 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:38:47.654118 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:38:47.654124 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:38:47.654130 | orchestrator |
2026-01-30 03:38:47.654136 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-30 03:38:47.654142 | orchestrator | Friday 30 January 2026 03:38:44 +0000 (0:00:00.644) 0:02:02.135 ********
2026-01-30 03:38:47.654148 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:47.654154 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:47.654160 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:47.654165 | orchestrator |
2026-01-30 03:38:47.654171 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-30 03:38:47.654177 | orchestrator | Friday 30 January 2026 03:38:45 +0000 (0:00:00.768) 0:02:02.904 ********
2026-01-30 03:38:47.654183 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:38:47.654189 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:38:47.654195 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:38:47.654201 | orchestrator |
2026-01-30 03:38:47.654207 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-30 03:38:47.654213 | orchestrator | Friday 30 January 2026 03:38:45 +0000 (0:00:00.621) 0:02:03.525 ********
2026-01-30 03:38:47.654219 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:47.654224 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:47.654230 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:47.654236 | orchestrator |
2026-01-30 03:38:47.654242 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-30 03:38:47.654248 | orchestrator | Friday 30 January 2026 03:38:46 +0000 (0:00:00.922) 0:02:04.448 ********
2026-01-30 03:38:47.654254 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:38:47.654260 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:38:47.654265 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:38:47.654271 | orchestrator |
2026-01-30 03:38:47.654277 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:38:47.654284 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-30 03:38:47.654292 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-30 03:38:47.654298 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-30 03:38:47.654304 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:38:47.654316 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:38:47.654322 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:38:47.654328 | orchestrator |
2026-01-30 03:38:47.654334 | orchestrator |
2026-01-30 03:38:47.654350 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:38:47.654356 | orchestrator | Friday 30 January 2026 03:38:47 +0000 (0:00:00.788) 0:02:05.237 ********
2026-01-30 03:38:47.654364 | orchestrator | ===============================================================================
2026-01-30 03:38:47.654371 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.51s
2026-01-30 03:38:47.654377 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.91s
2026-01-30 03:38:47.654384 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.67s
2026-01-30 03:38:47.654391 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.58s
2026-01-30 03:38:47.654397 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.58s
2026-01-30 03:38:47.654417 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.72s
2026-01-30 03:38:47.654423 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.65s
2026-01-30 03:38:47.654430 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.99s
2026-01-30 03:38:47.654437 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.43s
2026-01-30 03:38:47.654444 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.00s
2026-01-30 03:38:47.654451 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.49s
2026-01-30 03:38:47.654458 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s
2026-01-30 03:38:47.654464 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.36s
2026-01-30 03:38:47.654471 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.36s
2026-01-30 03:38:47.654478 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.35s
2026-01-30 03:38:47.654485 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.16s
2026-01-30 03:38:47.654492 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 0.94s
2026-01-30 03:38:47.654499 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 0.94s
2026-01-30 03:38:47.654505 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 0.94s
2026-01-30 03:38:47.654512 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 0.92s
2026-01-30 03:38:47.835581 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-30 03:38:47.835673 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-01-30 03:38:49.705077 | orchestrator | 2026-01-30 03:38:49 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-30 03:38:59.893474 | orchestrator | 2026-01-30 03:38:59 | INFO  | Task 5f41baa2-04a6-4609-a4e7-367b911dcc33 (wipe-partitions) was prepared for execution.
2026-01-30 03:38:59.893616 | orchestrator | 2026-01-30 03:38:59 | INFO  | It takes a moment until task 5f41baa2-04a6-4609-a4e7-367b911dcc33 (wipe-partitions) has been started and output is visible here.
2026-01-30 03:39:11.951944 | orchestrator |
2026-01-30 03:39:11.952078 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-30 03:39:11.952106 | orchestrator |
2026-01-30 03:39:11.952125 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-30 03:39:11.952143 | orchestrator | Friday 30 January 2026 03:39:03 +0000 (0:00:00.120) 0:00:00.120 ********
2026-01-30 03:39:11.952200 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:39:11.952224 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:39:11.952242 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:39:11.952258 | orchestrator |
2026-01-30 03:39:11.952276 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-30 03:39:11.952294 | orchestrator | Friday 30 January 2026 03:39:04 +0000 (0:00:00.565) 0:00:00.686 ********
2026-01-30 03:39:11.952311 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:11.952328 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:39:11.952344 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:39:11.952361 | orchestrator |
2026-01-30 03:39:11.952378 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-30 03:39:11.952395 | orchestrator | Friday 30 January 2026 03:39:04 +0000 (0:00:00.580) 0:00:01.033 ********
2026-01-30 03:39:11.952413 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:39:11.952430 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:39:11.952447 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:39:11.952464 | orchestrator |
2026-01-30 03:39:11.952483 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-30 03:39:11.952501 | orchestrator | Friday 30 January 2026 03:39:05 +0000 (0:00:00.580) 0:00:01.614 ********
2026-01-30 03:39:11.952548 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:11.952566 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:39:11.952586 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:39:11.952605 | orchestrator |
2026-01-30 03:39:11.952623 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-30 03:39:11.952642 | orchestrator | Friday 30 January 2026 03:39:05 +0000 (0:00:00.285) 0:00:01.899 ********
2026-01-30 03:39:11.952660 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-30 03:39:11.952678 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-30 03:39:11.952697 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-30 03:39:11.952715 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-30 03:39:11.952731 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-30 03:39:11.952748 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-30 03:39:11.952784 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-30 03:39:11.952804 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-30 03:39:11.952820 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-30 03:39:11.952837 | orchestrator |
2026-01-30 03:39:11.952855 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-30 03:39:11.952873 | orchestrator | Friday 30 January 2026 03:39:06 +0000 (0:00:01.209) 0:00:03.108 ********
2026-01-30 03:39:11.952889 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-30 03:39:11.952907 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-30 03:39:11.952925 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-30 03:39:11.952942 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-30 03:39:11.952958 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-30 03:39:11.952975 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-30 03:39:11.952993 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-30 03:39:11.953010 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-30 03:39:11.953026 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-30 03:39:11.953043 | orchestrator |
2026-01-30 03:39:11.953058 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-30 03:39:11.953074 | orchestrator | Friday 30 January 2026 03:39:08 +0000 (0:00:01.481) 0:00:04.590 ********
2026-01-30 03:39:11.953089 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-30 03:39:11.953105 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-30 03:39:11.953120 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-30 03:39:11.953135 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-30 03:39:11.953165 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-30 03:39:11.953183 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-30 03:39:11.953199 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-30 03:39:11.953214 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-30 03:39:11.953230 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-30 03:39:11.953246 | orchestrator |
2026-01-30 03:39:11.953262 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-30 03:39:11.953279 | orchestrator | Friday 30 January 2026 03:39:10 +0000 (0:00:02.126) 0:00:06.717 ********
2026-01-30 03:39:11.953294 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:39:11.953311 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:39:11.953320 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:39:11.953330 | orchestrator |
2026-01-30 03:39:11.953340 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-30 03:39:11.953350 | orchestrator | Friday 30 January 2026 03:39:11 +0000 (0:00:00.609) 0:00:07.326 ********
2026-01-30 03:39:11.953359 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:39:11.953369 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:39:11.953379 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:39:11.953388 | orchestrator |
2026-01-30 03:39:11.953398 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:39:11.953410 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:11.953421 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:11.953453 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:11.953463 | orchestrator |
2026-01-30 03:39:11.953473 | orchestrator |
2026-01-30 03:39:11.953483 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:39:11.953493 | orchestrator | Friday 30 January 2026 03:39:11 +0000 (0:00:00.630) 0:00:07.957 ********
2026-01-30 03:39:11.953535 | orchestrator | ===============================================================================
2026-01-30 03:39:11.953559 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.13s
2026-01-30 03:39:11.953580 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.48s
2026-01-30 03:39:11.953596 | orchestrator | Check device availability ----------------------------------------------- 1.21s
2026-01-30 03:39:11.953611 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2026-01-30 03:39:11.953626 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2026-01-30 03:39:11.953642 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s
2026-01-30 03:39:11.953657 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2026-01-30 03:39:11.953673 | orchestrator | Remove all rook related logical devices --------------------------------- 0.35s
2026-01-30 03:39:11.953687 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2026-01-30 03:39:24.168266 | orchestrator | 2026-01-30 03:39:24 | INFO  | Task 1acc38f6-1a90-4f03-8f10-af8ddb556c73 (facts) was prepared for execution.
2026-01-30 03:39:24.168374 | orchestrator | 2026-01-30 03:39:24 | INFO  | It takes a moment until task 1acc38f6-1a90-4f03-8f10-af8ddb556c73 (facts) has been started and output is visible here.
2026-01-30 03:39:35.984399 | orchestrator |
2026-01-30 03:39:35.984603 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-30 03:39:35.984648 | orchestrator |
2026-01-30 03:39:35.984665 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-30 03:39:35.984682 | orchestrator | Friday 30 January 2026 03:39:27 +0000 (0:00:00.190) 0:00:00.190 ********
2026-01-30 03:39:35.984735 | orchestrator | ok: [testbed-manager]
2026-01-30 03:39:35.984754 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:39:35.984770 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:39:35.984785 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:39:35.984801 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:39:35.984817 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:39:35.984833 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:39:35.984849 | orchestrator |
2026-01-30 03:39:35.984864 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-30 03:39:35.984881 | orchestrator | Friday 30 January 2026 03:39:28 +0000 (0:00:00.882) 0:00:01.073 ********
2026-01-30 03:39:35.984898 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:39:35.984915 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:39:35.984932 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:39:35.984947 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:39:35.984963 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:35.984979 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:39:35.984994 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:39:35.985009 | orchestrator |
2026-01-30 03:39:35.985025 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-30 03:39:35.985042 | orchestrator |
2026-01-30 03:39:35.985059 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-30 03:39:35.985075 | orchestrator | Friday 30 January 2026 03:39:29 +0000 (0:00:01.043) 0:00:02.116 ********
2026-01-30 03:39:35.985092 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:39:35.985107 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:39:35.985124 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:39:35.985140 | orchestrator | ok: [testbed-manager]
2026-01-30 03:39:35.985157 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:39:35.985173 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:39:35.985189 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:39:35.985204 | orchestrator |
2026-01-30 03:39:35.985218 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-30 03:39:35.985233 | orchestrator |
2026-01-30 03:39:35.985248 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-30 03:39:35.985263 | orchestrator | Friday 30 January 2026 03:39:35 +0000 (0:00:05.170) 0:00:07.287 ********
2026-01-30 03:39:35.985280 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:39:35.985296 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:39:35.985313 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:39:35.985327 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:39:35.985343 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:35.985358 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:39:35.985374 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:39:35.985389 | orchestrator |
2026-01-30 03:39:35.985406 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:39:35.985422 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:35.985539 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:35.985563 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:35.985580 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:35.985596 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:35.985612 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:35.985644 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:39:35.985660 | orchestrator |
2026-01-30 03:39:35.985677 | orchestrator |
2026-01-30 03:39:35.985694 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:39:35.985708 | orchestrator | Friday 30 January 2026 03:39:35 +0000 (0:00:00.544) 0:00:07.831 ********
2026-01-30 03:39:35.985725 | orchestrator | ===============================================================================
2026-01-30 03:39:35.985742 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.17s
2026-01-30 03:39:35.985759 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s
2026-01-30 03:39:35.985774 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.88s
2026-01-30 03:39:35.985790 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2026-01-30 03:39:38.221030 | orchestrator | 2026-01-30 03:39:38 | INFO  | Task 7370775e-cc9b-4043-b913-149e612f6088 (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-30 03:39:38.221130 | orchestrator | 2026-01-30 03:39:38 | INFO  | It takes a moment until task 7370775e-cc9b-4043-b913-149e612f6088 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-30 03:39:48.591288 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-30 03:39:48.591382 | orchestrator | 2.16.14
2026-01-30 03:39:48.591395 | orchestrator |
2026-01-30 03:39:48.591405 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-30 03:39:48.591414 | orchestrator |
2026-01-30 03:39:48.591422 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-30 03:39:48.591431 | orchestrator | Friday 30 January 2026 03:39:42 +0000 (0:00:00.269) 0:00:00.269 ********
2026-01-30 03:39:48.591440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-30 03:39:48.591448 | orchestrator |
2026-01-30 03:39:48.591471 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-30 03:39:48.591479 | orchestrator | Friday 30 January 2026 03:39:42 +0000 (0:00:00.214) 0:00:00.484 ********
2026-01-30 03:39:48.591487 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:39:48.591540 | orchestrator |
2026-01-30 03:39:48.591548 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591556 | orchestrator | Friday 30 January 2026 03:39:42 +0000 (0:00:00.189) 0:00:00.673 ********
2026-01-30 03:39:48.591564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-30 03:39:48.591572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-30 03:39:48.591579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-30 03:39:48.591587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-30 03:39:48.591595 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-30 03:39:48.591603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-30 03:39:48.591610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-30 03:39:48.591618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-30 03:39:48.591626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-30 03:39:48.591633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-30 03:39:48.591641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-30 03:39:48.591649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-30 03:39:48.591678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-30 03:39:48.591686 | orchestrator |
2026-01-30 03:39:48.591694 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591702 | orchestrator | Friday 30 January 2026 03:39:43 +0000 (0:00:00.347) 0:00:01.021 ********
2026-01-30 03:39:48.591710 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591719 | orchestrator |
2026-01-30 03:39:48.591727 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591734 | orchestrator | Friday 30 January 2026 03:39:43 +0000 (0:00:00.185) 0:00:01.207 ********
2026-01-30 03:39:48.591742 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591750 | orchestrator |
2026-01-30 03:39:48.591758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591766 | orchestrator | Friday 30 January 2026 03:39:43 +0000 (0:00:00.174) 0:00:01.382 ********
2026-01-30 03:39:48.591774 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591781 | orchestrator |
2026-01-30 03:39:48.591789 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591797 | orchestrator | Friday 30 January 2026 03:39:43 +0000 (0:00:00.175) 0:00:01.557 ********
2026-01-30 03:39:48.591805 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591813 | orchestrator |
2026-01-30 03:39:48.591821 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591829 | orchestrator | Friday 30 January 2026 03:39:43 +0000 (0:00:00.179) 0:00:01.737 ********
2026-01-30 03:39:48.591837 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591846 | orchestrator |
2026-01-30 03:39:48.591856 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591865 | orchestrator | Friday 30 January 2026 03:39:43 +0000 (0:00:00.189) 0:00:01.926 ********
2026-01-30 03:39:48.591874 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591882 | orchestrator |
2026-01-30 03:39:48.591892 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591901 | orchestrator | Friday 30 January 2026 03:39:44 +0000 (0:00:00.174) 0:00:02.101 ********
2026-01-30 03:39:48.591910 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591919 | orchestrator |
2026-01-30 03:39:48.591927 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591936 | orchestrator | Friday 30 January 2026 03:39:44 +0000 (0:00:00.169) 0:00:02.271 ********
2026-01-30 03:39:48.591945 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:39:48.591954 | orchestrator |
2026-01-30 03:39:48.591963 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.591971 | orchestrator | Friday 30 January 2026 03:39:44 +0000 (0:00:00.188) 0:00:02.459 ********
2026-01-30 03:39:48.591981 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a)
2026-01-30 03:39:48.591991 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a)
2026-01-30 03:39:48.592000 | orchestrator |
2026-01-30 03:39:48.592010 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.592034 | orchestrator | Friday 30 January 2026 03:39:44 +0000 (0:00:00.379) 0:00:02.839 ********
2026-01-30 03:39:48.592044 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e)
2026-01-30 03:39:48.592054 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e)
2026-01-30 03:39:48.592063 | orchestrator |
2026-01-30 03:39:48.592072 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.592081 | orchestrator | Friday 30 January 2026 03:39:45 +0000 (0:00:00.519) 0:00:03.358 ********
2026-01-30 03:39:48.592095 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c)
2026-01-30 03:39:48.592110 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c)
2026-01-30 03:39:48.592119 | orchestrator |
2026-01-30 03:39:48.592129 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:39:48.592138 | orchestrator | Friday 30 January 2026 03:39:45
+0000 (0:00:00.546) 0:00:03.905 ******** 2026-01-30 03:39:48.592147 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db) 2026-01-30 03:39:48.592156 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db) 2026-01-30 03:39:48.592165 | orchestrator | 2026-01-30 03:39:48.592174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:39:48.592183 | orchestrator | Friday 30 January 2026 03:39:46 +0000 (0:00:00.631) 0:00:04.537 ******** 2026-01-30 03:39:48.592192 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-30 03:39:48.592201 | orchestrator | 2026-01-30 03:39:48.592209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592217 | orchestrator | Friday 30 January 2026 03:39:46 +0000 (0:00:00.286) 0:00:04.824 ******** 2026-01-30 03:39:48.592225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-30 03:39:48.592233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-30 03:39:48.592241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-30 03:39:48.592249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-30 03:39:48.592257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-30 03:39:48.592266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-30 03:39:48.592273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-30 03:39:48.592281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-01-30 03:39:48.592289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-30 03:39:48.592297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-30 03:39:48.592305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-30 03:39:48.592312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-30 03:39:48.592320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-30 03:39:48.592328 | orchestrator | 2026-01-30 03:39:48.592336 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592344 | orchestrator | Friday 30 January 2026 03:39:47 +0000 (0:00:00.330) 0:00:05.154 ******** 2026-01-30 03:39:48.592352 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:48.592360 | orchestrator | 2026-01-30 03:39:48.592368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592376 | orchestrator | Friday 30 January 2026 03:39:47 +0000 (0:00:00.197) 0:00:05.352 ******** 2026-01-30 03:39:48.592384 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:48.592392 | orchestrator | 2026-01-30 03:39:48.592400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592408 | orchestrator | Friday 30 January 2026 03:39:47 +0000 (0:00:00.207) 0:00:05.560 ******** 2026-01-30 03:39:48.592416 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:48.592424 | orchestrator | 2026-01-30 03:39:48.592432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592440 | orchestrator | Friday 30 January 2026 03:39:47 
+0000 (0:00:00.195) 0:00:05.755 ******** 2026-01-30 03:39:48.592452 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:48.592461 | orchestrator | 2026-01-30 03:39:48.592469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592477 | orchestrator | Friday 30 January 2026 03:39:47 +0000 (0:00:00.188) 0:00:05.944 ******** 2026-01-30 03:39:48.592485 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:48.592517 | orchestrator | 2026-01-30 03:39:48.592525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592533 | orchestrator | Friday 30 January 2026 03:39:48 +0000 (0:00:00.199) 0:00:06.143 ******** 2026-01-30 03:39:48.592541 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:48.592549 | orchestrator | 2026-01-30 03:39:48.592557 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:48.592565 | orchestrator | Friday 30 January 2026 03:39:48 +0000 (0:00:00.196) 0:00:06.339 ******** 2026-01-30 03:39:48.592573 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:48.592581 | orchestrator | 2026-01-30 03:39:48.592593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:56.025572 | orchestrator | Friday 30 January 2026 03:39:48 +0000 (0:00:00.194) 0:00:06.533 ******** 2026-01-30 03:39:56.025686 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.025704 | orchestrator | 2026-01-30 03:39:56.025717 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:56.025729 | orchestrator | Friday 30 January 2026 03:39:48 +0000 (0:00:00.190) 0:00:06.724 ******** 2026-01-30 03:39:56.025740 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-30 03:39:56.025752 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-30 
03:39:56.025764 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-30 03:39:56.025791 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-30 03:39:56.025803 | orchestrator | 2026-01-30 03:39:56.025814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:56.025825 | orchestrator | Friday 30 January 2026 03:39:49 +0000 (0:00:00.968) 0:00:07.692 ******** 2026-01-30 03:39:56.025836 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.025848 | orchestrator | 2026-01-30 03:39:56.025859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:56.025870 | orchestrator | Friday 30 January 2026 03:39:49 +0000 (0:00:00.222) 0:00:07.914 ******** 2026-01-30 03:39:56.025881 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.025892 | orchestrator | 2026-01-30 03:39:56.025903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:56.025914 | orchestrator | Friday 30 January 2026 03:39:50 +0000 (0:00:00.191) 0:00:08.106 ******** 2026-01-30 03:39:56.025925 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.025936 | orchestrator | 2026-01-30 03:39:56.025948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:39:56.025959 | orchestrator | Friday 30 January 2026 03:39:50 +0000 (0:00:00.209) 0:00:08.315 ******** 2026-01-30 03:39:56.025997 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026008 | orchestrator | 2026-01-30 03:39:56.026081 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-30 03:39:56.026095 | orchestrator | Friday 30 January 2026 03:39:50 +0000 (0:00:00.206) 0:00:08.521 ******** 2026-01-30 03:39:56.026108 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-01-30 03:39:56.026121 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-01-30 03:39:56.026134 | orchestrator | 2026-01-30 03:39:56.026147 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-30 03:39:56.026160 | orchestrator | Friday 30 January 2026 03:39:50 +0000 (0:00:00.170) 0:00:08.692 ******** 2026-01-30 03:39:56.026173 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026185 | orchestrator | 2026-01-30 03:39:56.026198 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-30 03:39:56.026210 | orchestrator | Friday 30 January 2026 03:39:50 +0000 (0:00:00.119) 0:00:08.812 ******** 2026-01-30 03:39:56.026247 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026260 | orchestrator | 2026-01-30 03:39:56.026273 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-30 03:39:56.026285 | orchestrator | Friday 30 January 2026 03:39:50 +0000 (0:00:00.135) 0:00:08.947 ******** 2026-01-30 03:39:56.026298 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026310 | orchestrator | 2026-01-30 03:39:56.026323 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-30 03:39:56.026336 | orchestrator | Friday 30 January 2026 03:39:51 +0000 (0:00:00.138) 0:00:09.085 ******** 2026-01-30 03:39:56.026349 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:39:56.026362 | orchestrator | 2026-01-30 03:39:56.026375 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-30 03:39:56.026389 | orchestrator | Friday 30 January 2026 03:39:51 +0000 (0:00:00.141) 0:00:09.227 ******** 2026-01-30 03:39:56.026403 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}}) 2026-01-30 03:39:56.026417 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}}) 2026-01-30 03:39:56.026430 | orchestrator | 2026-01-30 03:39:56.026441 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-30 03:39:56.026453 | orchestrator | Friday 30 January 2026 03:39:51 +0000 (0:00:00.146) 0:00:09.374 ******** 2026-01-30 03:39:56.026464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}})  2026-01-30 03:39:56.026478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}})  2026-01-30 03:39:56.026527 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026540 | orchestrator | 2026-01-30 03:39:56.026551 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-30 03:39:56.026562 | orchestrator | Friday 30 January 2026 03:39:51 +0000 (0:00:00.328) 0:00:09.702 ******** 2026-01-30 03:39:56.026573 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}})  2026-01-30 03:39:56.026585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}})  2026-01-30 03:39:56.026596 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026607 | orchestrator | 2026-01-30 03:39:56.026617 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-30 03:39:56.026632 | orchestrator | Friday 30 January 2026 03:39:51 +0000 (0:00:00.153) 0:00:09.856 ******** 2026-01-30 03:39:56.026649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}})  2026-01-30 03:39:56.026690 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}})  2026-01-30 03:39:56.026710 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026728 | orchestrator | 2026-01-30 03:39:56.026751 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-30 03:39:56.026778 | orchestrator | Friday 30 January 2026 03:39:52 +0000 (0:00:00.146) 0:00:10.002 ******** 2026-01-30 03:39:56.026796 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:39:56.026813 | orchestrator | 2026-01-30 03:39:56.026831 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-30 03:39:56.026862 | orchestrator | Friday 30 January 2026 03:39:52 +0000 (0:00:00.136) 0:00:10.139 ******** 2026-01-30 03:39:56.026882 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:39:56.026901 | orchestrator | 2026-01-30 03:39:56.026914 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-30 03:39:56.026925 | orchestrator | Friday 30 January 2026 03:39:52 +0000 (0:00:00.150) 0:00:10.289 ******** 2026-01-30 03:39:56.026948 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.026959 | orchestrator | 2026-01-30 03:39:56.026993 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-30 03:39:56.027004 | orchestrator | Friday 30 January 2026 03:39:52 +0000 (0:00:00.162) 0:00:10.452 ******** 2026-01-30 03:39:56.027015 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.027026 | orchestrator | 2026-01-30 03:39:56.027037 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-30 03:39:56.027048 | orchestrator | Friday 30 January 2026 03:39:52 +0000 (0:00:00.132) 0:00:10.585 ******** 2026-01-30 03:39:56.027058 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.027069 | orchestrator | 2026-01-30 
03:39:56.027080 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-30 03:39:56.027091 | orchestrator | Friday 30 January 2026 03:39:52 +0000 (0:00:00.136) 0:00:10.722 ******** 2026-01-30 03:39:56.027102 | orchestrator | ok: [testbed-node-3] => { 2026-01-30 03:39:56.027112 | orchestrator |  "ceph_osd_devices": { 2026-01-30 03:39:56.027124 | orchestrator |  "sdb": { 2026-01-30 03:39:56.027136 | orchestrator |  "osd_lvm_uuid": "8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0" 2026-01-30 03:39:56.027147 | orchestrator |  }, 2026-01-30 03:39:56.027158 | orchestrator |  "sdc": { 2026-01-30 03:39:56.027169 | orchestrator |  "osd_lvm_uuid": "a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b" 2026-01-30 03:39:56.027180 | orchestrator |  } 2026-01-30 03:39:56.027191 | orchestrator |  } 2026-01-30 03:39:56.027202 | orchestrator | } 2026-01-30 03:39:56.027213 | orchestrator | 2026-01-30 03:39:56.027224 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-30 03:39:56.027235 | orchestrator | Friday 30 January 2026 03:39:52 +0000 (0:00:00.144) 0:00:10.866 ******** 2026-01-30 03:39:56.027246 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.027256 | orchestrator | 2026-01-30 03:39:56.027267 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-30 03:39:56.027278 | orchestrator | Friday 30 January 2026 03:39:53 +0000 (0:00:00.133) 0:00:11.000 ******** 2026-01-30 03:39:56.027289 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.027300 | orchestrator | 2026-01-30 03:39:56.027311 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-30 03:39:56.027322 | orchestrator | Friday 30 January 2026 03:39:53 +0000 (0:00:00.139) 0:00:11.140 ******** 2026-01-30 03:39:56.027332 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:39:56.027343 | orchestrator | 2026-01-30 
03:39:56.027354 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-30 03:39:56.027365 | orchestrator | Friday 30 January 2026 03:39:53 +0000 (0:00:00.126) 0:00:11.266 ******** 2026-01-30 03:39:56.027375 | orchestrator | changed: [testbed-node-3] => { 2026-01-30 03:39:56.027386 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-30 03:39:56.027397 | orchestrator |  "ceph_osd_devices": { 2026-01-30 03:39:56.027461 | orchestrator |  "sdb": { 2026-01-30 03:39:56.027473 | orchestrator |  "osd_lvm_uuid": "8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0" 2026-01-30 03:39:56.027511 | orchestrator |  }, 2026-01-30 03:39:56.027530 | orchestrator |  "sdc": { 2026-01-30 03:39:56.027542 | orchestrator |  "osd_lvm_uuid": "a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b" 2026-01-30 03:39:56.027553 | orchestrator |  } 2026-01-30 03:39:56.027564 | orchestrator |  }, 2026-01-30 03:39:56.027575 | orchestrator |  "lvm_volumes": [ 2026-01-30 03:39:56.027586 | orchestrator |  { 2026-01-30 03:39:56.027597 | orchestrator |  "data": "osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0", 2026-01-30 03:39:56.027608 | orchestrator |  "data_vg": "ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0" 2026-01-30 03:39:56.027619 | orchestrator |  }, 2026-01-30 03:39:56.027630 | orchestrator |  { 2026-01-30 03:39:56.027641 | orchestrator |  "data": "osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b", 2026-01-30 03:39:56.027660 | orchestrator |  "data_vg": "ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b" 2026-01-30 03:39:56.027671 | orchestrator |  } 2026-01-30 03:39:56.027682 | orchestrator |  ] 2026-01-30 03:39:56.027693 | orchestrator |  } 2026-01-30 03:39:56.027704 | orchestrator | } 2026-01-30 03:39:56.027715 | orchestrator | 2026-01-30 03:39:56.027726 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-30 03:39:56.027737 | orchestrator | Friday 30 January 2026 03:39:53 +0000 (0:00:00.369) 0:00:11.635 ******** 2026-01-30 
03:39:56.027748 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-30 03:39:56.027758 | orchestrator | 2026-01-30 03:39:56.027769 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-30 03:39:56.027780 | orchestrator | 2026-01-30 03:39:56.027791 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-30 03:39:56.027802 | orchestrator | Friday 30 January 2026 03:39:55 +0000 (0:00:01.840) 0:00:13.476 ******** 2026-01-30 03:39:56.027813 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-30 03:39:56.027824 | orchestrator | 2026-01-30 03:39:56.027835 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-30 03:39:56.027846 | orchestrator | Friday 30 January 2026 03:39:55 +0000 (0:00:00.257) 0:00:13.734 ******** 2026-01-30 03:39:56.027857 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:39:56.027868 | orchestrator | 2026-01-30 03:39:56.027891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638563 | orchestrator | Friday 30 January 2026 03:39:56 +0000 (0:00:00.237) 0:00:13.971 ******** 2026-01-30 03:40:04.638640 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-30 03:40:04.638647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-30 03:40:04.638652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-30 03:40:04.638669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-30 03:40:04.638674 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-30 03:40:04.638678 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-30 03:40:04.638682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-30 03:40:04.638686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-30 03:40:04.638691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-30 03:40:04.638695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-30 03:40:04.638699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-30 03:40:04.638703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-30 03:40:04.638707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-30 03:40:04.638711 | orchestrator | 2026-01-30 03:40:04.638716 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638720 | orchestrator | Friday 30 January 2026 03:39:56 +0000 (0:00:00.391) 0:00:14.363 ******** 2026-01-30 03:40:04.638724 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:40:04.638729 | orchestrator | 2026-01-30 03:40:04.638733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638737 | orchestrator | Friday 30 January 2026 03:39:56 +0000 (0:00:00.215) 0:00:14.578 ******** 2026-01-30 03:40:04.638741 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:40:04.638745 | orchestrator | 2026-01-30 03:40:04.638749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638753 | orchestrator | Friday 30 January 2026 03:39:56 +0000 (0:00:00.213) 0:00:14.791 ******** 2026-01-30 03:40:04.638774 | orchestrator | skipping: 
[testbed-node-4] 2026-01-30 03:40:04.638779 | orchestrator | 2026-01-30 03:40:04.638783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638787 | orchestrator | Friday 30 January 2026 03:39:57 +0000 (0:00:00.197) 0:00:14.989 ******** 2026-01-30 03:40:04.638790 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:40:04.638794 | orchestrator | 2026-01-30 03:40:04.638798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638802 | orchestrator | Friday 30 January 2026 03:39:57 +0000 (0:00:00.560) 0:00:15.549 ******** 2026-01-30 03:40:04.638806 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:40:04.638810 | orchestrator | 2026-01-30 03:40:04.638814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638818 | orchestrator | Friday 30 January 2026 03:39:57 +0000 (0:00:00.203) 0:00:15.753 ******** 2026-01-30 03:40:04.638822 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:40:04.638826 | orchestrator | 2026-01-30 03:40:04.638830 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638834 | orchestrator | Friday 30 January 2026 03:39:58 +0000 (0:00:00.241) 0:00:15.994 ******** 2026-01-30 03:40:04.638838 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:40:04.638842 | orchestrator | 2026-01-30 03:40:04.638846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638850 | orchestrator | Friday 30 January 2026 03:39:58 +0000 (0:00:00.211) 0:00:16.206 ******** 2026-01-30 03:40:04.638854 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:40:04.638858 | orchestrator | 2026-01-30 03:40:04.638862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638866 | 
orchestrator | Friday 30 January 2026 03:39:58 +0000 (0:00:00.193) 0:00:16.399 ******** 2026-01-30 03:40:04.638870 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb) 2026-01-30 03:40:04.638875 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb) 2026-01-30 03:40:04.638879 | orchestrator | 2026-01-30 03:40:04.638883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638887 | orchestrator | Friday 30 January 2026 03:39:58 +0000 (0:00:00.414) 0:00:16.814 ******** 2026-01-30 03:40:04.638891 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea) 2026-01-30 03:40:04.638895 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea) 2026-01-30 03:40:04.638899 | orchestrator | 2026-01-30 03:40:04.638903 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638907 | orchestrator | Friday 30 January 2026 03:39:59 +0000 (0:00:00.421) 0:00:17.236 ******** 2026-01-30 03:40:04.638911 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4) 2026-01-30 03:40:04.638915 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4) 2026-01-30 03:40:04.638919 | orchestrator | 2026-01-30 03:40:04.638923 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:40:04.638936 | orchestrator | Friday 30 January 2026 03:39:59 +0000 (0:00:00.398) 0:00:17.634 ******** 2026-01-30 03:40:04.638940 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c) 2026-01-30 03:40:04.638944 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c)
2026-01-30 03:40:04.638948 | orchestrator |
2026-01-30 03:40:04.638952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:04.638959 | orchestrator | Friday 30 January 2026 03:40:00 +0000 (0:00:00.618) 0:00:18.252 ********
2026-01-30 03:40:04.638963 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-30 03:40:04.638972 | orchestrator |
2026-01-30 03:40:04.638976 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.638980 | orchestrator | Friday 30 January 2026 03:40:00 +0000 (0:00:00.541) 0:00:18.794 ********
2026-01-30 03:40:04.638984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-30 03:40:04.638988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-30 03:40:04.638992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-30 03:40:04.638996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-30 03:40:04.639000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-30 03:40:04.639004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-30 03:40:04.639008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-30 03:40:04.639012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-30 03:40:04.639016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-30 03:40:04.639020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-30 03:40:04.639024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-30 03:40:04.639028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-30 03:40:04.639032 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-30 03:40:04.639036 | orchestrator |
2026-01-30 03:40:04.639040 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639044 | orchestrator | Friday 30 January 2026 03:40:01 +0000 (0:00:00.790) 0:00:19.584 ********
2026-01-30 03:40:04.639048 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639052 | orchestrator |
2026-01-30 03:40:04.639056 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639060 | orchestrator | Friday 30 January 2026 03:40:01 +0000 (0:00:00.217) 0:00:19.801 ********
2026-01-30 03:40:04.639064 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639068 | orchestrator |
2026-01-30 03:40:04.639072 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639077 | orchestrator | Friday 30 January 2026 03:40:02 +0000 (0:00:00.210) 0:00:20.012 ********
2026-01-30 03:40:04.639081 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639086 | orchestrator |
2026-01-30 03:40:04.639091 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639095 | orchestrator | Friday 30 January 2026 03:40:02 +0000 (0:00:00.196) 0:00:20.209 ********
2026-01-30 03:40:04.639100 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639104 | orchestrator |
2026-01-30 03:40:04.639109 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639114 | orchestrator | Friday 30 January 2026 03:40:02 +0000 (0:00:00.205) 0:00:20.414 ********
2026-01-30 03:40:04.639118 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639123 | orchestrator |
2026-01-30 03:40:04.639127 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639132 | orchestrator | Friday 30 January 2026 03:40:02 +0000 (0:00:00.208) 0:00:20.623 ********
2026-01-30 03:40:04.639137 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639141 | orchestrator |
2026-01-30 03:40:04.639146 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639151 | orchestrator | Friday 30 January 2026 03:40:02 +0000 (0:00:00.208) 0:00:20.832 ********
2026-01-30 03:40:04.639156 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639166 | orchestrator |
2026-01-30 03:40:04.639171 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639175 | orchestrator | Friday 30 January 2026 03:40:03 +0000 (0:00:00.201) 0:00:21.033 ********
2026-01-30 03:40:04.639180 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:04.639184 | orchestrator |
2026-01-30 03:40:04.639189 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639193 | orchestrator | Friday 30 January 2026 03:40:03 +0000 (0:00:00.191) 0:00:21.225 ********
2026-01-30 03:40:04.639198 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-30 03:40:04.639203 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-30 03:40:04.639208 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-30 03:40:04.639212 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-30 03:40:04.639217 | orchestrator |
2026-01-30 03:40:04.639222 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:04.639227 | orchestrator | Friday 30 January 2026 03:40:04 +0000 (0:00:00.800) 0:00:22.026 ********
2026-01-30 03:40:04.639231 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.411607 | orchestrator |
2026-01-30 03:40:10.411720 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:10.411737 | orchestrator | Friday 30 January 2026 03:40:04 +0000 (0:00:00.558) 0:00:22.585 ********
2026-01-30 03:40:10.411749 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.411761 | orchestrator |
2026-01-30 03:40:10.411772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:10.411784 | orchestrator | Friday 30 January 2026 03:40:04 +0000 (0:00:00.203) 0:00:22.789 ********
2026-01-30 03:40:10.411812 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.411824 | orchestrator |
2026-01-30 03:40:10.411835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:10.411846 | orchestrator | Friday 30 January 2026 03:40:05 +0000 (0:00:00.205) 0:00:22.995 ********
2026-01-30 03:40:10.411857 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.411868 | orchestrator |
2026-01-30 03:40:10.411879 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-30 03:40:10.411890 | orchestrator | Friday 30 January 2026 03:40:05 +0000 (0:00:00.204) 0:00:23.199 ********
2026-01-30 03:40:10.411901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-01-30 03:40:10.411912 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-01-30 03:40:10.411923 | orchestrator |
2026-01-30 03:40:10.411934 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-30 03:40:10.411945 | orchestrator | Friday 30 January 2026 03:40:05 +0000 (0:00:00.185) 0:00:23.384 ********
2026-01-30 03:40:10.411957 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.411976 | orchestrator |
2026-01-30 03:40:10.411995 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-30 03:40:10.412012 | orchestrator | Friday 30 January 2026 03:40:05 +0000 (0:00:00.157) 0:00:23.542 ********
2026-01-30 03:40:10.412029 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412047 | orchestrator |
2026-01-30 03:40:10.412065 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-30 03:40:10.412084 | orchestrator | Friday 30 January 2026 03:40:05 +0000 (0:00:00.142) 0:00:23.684 ********
2026-01-30 03:40:10.412102 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412121 | orchestrator |
2026-01-30 03:40:10.412140 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-30 03:40:10.412161 | orchestrator | Friday 30 January 2026 03:40:05 +0000 (0:00:00.137) 0:00:23.822 ********
2026-01-30 03:40:10.412180 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:40:10.412200 | orchestrator |
2026-01-30 03:40:10.412217 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-30 03:40:10.412231 | orchestrator | Friday 30 January 2026 03:40:06 +0000 (0:00:00.138) 0:00:23.961 ********
2026-01-30 03:40:10.412273 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}})
2026-01-30 03:40:10.412287 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1704272-fd93-5be5-acd9-a48498ed5939'}})
2026-01-30 03:40:10.412300 | orchestrator |
2026-01-30 03:40:10.412312 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-30 03:40:10.412325 | orchestrator | Friday 30 January 2026 03:40:06 +0000 (0:00:00.171) 0:00:24.132 ********
2026-01-30 03:40:10.412338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}})
2026-01-30 03:40:10.412352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1704272-fd93-5be5-acd9-a48498ed5939'}})
2026-01-30 03:40:10.412364 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412377 | orchestrator |
2026-01-30 03:40:10.412389 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-30 03:40:10.412402 | orchestrator | Friday 30 January 2026 03:40:06 +0000 (0:00:00.141) 0:00:24.274 ********
2026-01-30 03:40:10.412414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}})
2026-01-30 03:40:10.412427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1704272-fd93-5be5-acd9-a48498ed5939'}})
2026-01-30 03:40:10.412440 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412452 | orchestrator |
2026-01-30 03:40:10.412465 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-30 03:40:10.412476 | orchestrator | Friday 30 January 2026 03:40:06 +0000 (0:00:00.331) 0:00:24.606 ********
2026-01-30 03:40:10.412541 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}})
2026-01-30 03:40:10.412553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1704272-fd93-5be5-acd9-a48498ed5939'}})
2026-01-30 03:40:10.412564 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412575 | orchestrator |
2026-01-30 03:40:10.412586 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-30 03:40:10.412597 | orchestrator | Friday 30 January 2026 03:40:06 +0000 (0:00:00.151) 0:00:24.757 ********
2026-01-30 03:40:10.412608 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:40:10.412619 | orchestrator |
2026-01-30 03:40:10.412630 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-30 03:40:10.412641 | orchestrator | Friday 30 January 2026 03:40:06 +0000 (0:00:00.169) 0:00:24.927 ********
2026-01-30 03:40:10.412652 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:40:10.412662 | orchestrator |
2026-01-30 03:40:10.412673 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-30 03:40:10.412685 | orchestrator | Friday 30 January 2026 03:40:07 +0000 (0:00:00.144) 0:00:25.072 ********
2026-01-30 03:40:10.412716 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412728 | orchestrator |
2026-01-30 03:40:10.412739 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-30 03:40:10.412750 | orchestrator | Friday 30 January 2026 03:40:07 +0000 (0:00:00.131) 0:00:25.204 ********
2026-01-30 03:40:10.412761 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412772 | orchestrator |
2026-01-30 03:40:10.412783 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-30 03:40:10.412794 | orchestrator | Friday 30 January 2026 03:40:07 +0000 (0:00:00.131) 0:00:25.335 ********
2026-01-30 03:40:10.412813 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.412825 | orchestrator |
2026-01-30 03:40:10.412836 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-30 03:40:10.412847 | orchestrator | Friday 30 January 2026 03:40:07 +0000 (0:00:00.123) 0:00:25.459 ********
2026-01-30 03:40:10.412867 | orchestrator | ok: [testbed-node-4] => {
2026-01-30 03:40:10.412878 | orchestrator |     "ceph_osd_devices": {
2026-01-30 03:40:10.412890 | orchestrator |         "sdb": {
2026-01-30 03:40:10.412902 | orchestrator |             "osd_lvm_uuid": "3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267"
2026-01-30 03:40:10.412913 | orchestrator |         },
2026-01-30 03:40:10.412924 | orchestrator |         "sdc": {
2026-01-30 03:40:10.412935 | orchestrator |             "osd_lvm_uuid": "a1704272-fd93-5be5-acd9-a48498ed5939"
2026-01-30 03:40:10.412947 | orchestrator |         }
2026-01-30 03:40:10.412958 | orchestrator |     }
2026-01-30 03:40:10.412969 | orchestrator | }
2026-01-30 03:40:10.412980 | orchestrator |
2026-01-30 03:40:10.412991 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-30 03:40:10.413002 | orchestrator | Friday 30 January 2026 03:40:07 +0000 (0:00:00.134) 0:00:25.593 ********
2026-01-30 03:40:10.413013 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.413025 | orchestrator |
2026-01-30 03:40:10.413035 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-30 03:40:10.413046 | orchestrator | Friday 30 January 2026 03:40:07 +0000 (0:00:00.125) 0:00:25.719 ********
2026-01-30 03:40:10.413057 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.413068 | orchestrator |
2026-01-30 03:40:10.413079 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-30 03:40:10.413091 | orchestrator | Friday 30 January 2026 03:40:07 +0000 (0:00:00.134) 0:00:25.854 ********
2026-01-30 03:40:10.413102 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:40:10.413112 | orchestrator |
2026-01-30 03:40:10.413126 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-30 03:40:10.413144 | orchestrator | Friday 30 January 2026 03:40:08 +0000 (0:00:00.130) 0:00:25.984 ********
2026-01-30 03:40:10.413163 | orchestrator | changed: [testbed-node-4] => {
2026-01-30 03:40:10.413181 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-30 03:40:10.413199 | orchestrator |         "ceph_osd_devices": {
2026-01-30 03:40:10.413216 | orchestrator |             "sdb": {
2026-01-30 03:40:10.413235 | orchestrator |                 "osd_lvm_uuid": "3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267"
2026-01-30 03:40:10.413252 | orchestrator |             },
2026-01-30 03:40:10.413270 | orchestrator |             "sdc": {
2026-01-30 03:40:10.413288 | orchestrator |                 "osd_lvm_uuid": "a1704272-fd93-5be5-acd9-a48498ed5939"
2026-01-30 03:40:10.413307 | orchestrator |             }
2026-01-30 03:40:10.413325 | orchestrator |         },
2026-01-30 03:40:10.413343 | orchestrator |         "lvm_volumes": [
2026-01-30 03:40:10.413362 | orchestrator |             {
2026-01-30 03:40:10.413381 | orchestrator |                 "data": "osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267",
2026-01-30 03:40:10.413399 | orchestrator |                 "data_vg": "ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267"
2026-01-30 03:40:10.413417 | orchestrator |             },
2026-01-30 03:40:10.413435 | orchestrator |             {
2026-01-30 03:40:10.413453 | orchestrator |                 "data": "osd-block-a1704272-fd93-5be5-acd9-a48498ed5939",
2026-01-30 03:40:10.413512 | orchestrator |                 "data_vg": "ceph-a1704272-fd93-5be5-acd9-a48498ed5939"
2026-01-30 03:40:10.413533 | orchestrator |             }
2026-01-30 03:40:10.413568 | orchestrator |         ]
2026-01-30 03:40:10.413587 | orchestrator |     }
2026-01-30 03:40:10.413605 | orchestrator | }
2026-01-30 03:40:10.413624 | orchestrator |
2026-01-30 03:40:10.413635 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-30 03:40:10.413646 | orchestrator | Friday 30 January 2026 03:40:08 +0000 (0:00:00.389) 0:00:26.374 ********
2026-01-30 03:40:10.413657 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-30 03:40:10.413667 | orchestrator |
2026-01-30 03:40:10.413678 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-30 03:40:10.413689 | orchestrator |
2026-01-30 03:40:10.413700 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-30 03:40:10.413712 | orchestrator | Friday 30 January 2026 03:40:09 +0000 (0:00:01.123) 0:00:27.497 ********
2026-01-30 03:40:10.413733 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-30 03:40:10.413744 | orchestrator |
2026-01-30 03:40:10.413755 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-30 03:40:10.413766 | orchestrator | Friday 30 January 2026 03:40:09 +0000 (0:00:00.254) 0:00:27.752 ********
2026-01-30 03:40:10.413776 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:40:10.413787 | orchestrator |
2026-01-30 03:40:10.413799 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:10.413809 | orchestrator | Friday 30 January 2026 03:40:10 +0000 (0:00:00.233) 0:00:27.985 ********
2026-01-30 03:40:10.413820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-30 03:40:10.413831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-30 03:40:10.413842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-30 03:40:10.413853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-30 03:40:10.413864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-30 03:40:10.413886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-30 03:40:18.424100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-30 03:40:18.424194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-30 03:40:18.424205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-30 03:40:18.424210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-30 03:40:18.424229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-30 03:40:18.424234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-30 03:40:18.424239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-30 03:40:18.424244 | orchestrator |
2026-01-30 03:40:18.424250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424255 | orchestrator | Friday 30 January 2026 03:40:10 +0000 (0:00:00.368) 0:00:28.353 ********
2026-01-30 03:40:18.424261 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424267 | orchestrator |
2026-01-30 03:40:18.424272 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424277 | orchestrator | Friday 30 January 2026 03:40:10 +0000 (0:00:00.199) 0:00:28.552 ********
2026-01-30 03:40:18.424282 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424287 | orchestrator |
2026-01-30 03:40:18.424292 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424296 | orchestrator | Friday 30 January 2026 03:40:10 +0000 (0:00:00.173) 0:00:28.726 ********
2026-01-30 03:40:18.424301 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424306 | orchestrator |
2026-01-30 03:40:18.424311 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424316 | orchestrator | Friday 30 January 2026 03:40:10 +0000 (0:00:00.182) 0:00:28.909 ********
2026-01-30 03:40:18.424321 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424326 | orchestrator |
2026-01-30 03:40:18.424334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424342 | orchestrator | Friday 30 January 2026 03:40:11 +0000 (0:00:00.533) 0:00:29.443 ********
2026-01-30 03:40:18.424350 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424358 | orchestrator |
2026-01-30 03:40:18.424365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424372 | orchestrator | Friday 30 January 2026 03:40:11 +0000 (0:00:00.210) 0:00:29.654 ********
2026-01-30 03:40:18.424403 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424412 | orchestrator |
2026-01-30 03:40:18.424421 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424429 | orchestrator | Friday 30 January 2026 03:40:11 +0000 (0:00:00.203) 0:00:29.857 ********
2026-01-30 03:40:18.424437 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424444 | orchestrator |
2026-01-30 03:40:18.424449 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424453 | orchestrator | Friday 30 January 2026 03:40:12 +0000 (0:00:00.195) 0:00:30.053 ********
2026-01-30 03:40:18.424458 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424463 | orchestrator |
2026-01-30 03:40:18.424468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424472 | orchestrator | Friday 30 January 2026 03:40:12 +0000 (0:00:00.200) 0:00:30.254 ********
2026-01-30 03:40:18.424521 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844)
2026-01-30 03:40:18.424527 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844)
2026-01-30 03:40:18.424532 | orchestrator |
2026-01-30 03:40:18.424537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424542 | orchestrator | Friday 30 January 2026 03:40:12 +0000 (0:00:00.400) 0:00:30.654 ********
2026-01-30 03:40:18.424546 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de)
2026-01-30 03:40:18.424551 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de)
2026-01-30 03:40:18.424556 | orchestrator |
2026-01-30 03:40:18.424561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424566 | orchestrator | Friday 30 January 2026 03:40:13 +0000 (0:00:00.415) 0:00:31.070 ********
2026-01-30 03:40:18.424571 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660)
2026-01-30 03:40:18.424576 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660)
2026-01-30 03:40:18.424580 | orchestrator |
2026-01-30 03:40:18.424585 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424590 | orchestrator | Friday 30 January 2026 03:40:13 +0000 (0:00:00.406) 0:00:31.476 ********
2026-01-30 03:40:18.424595 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290)
2026-01-30 03:40:18.424600 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290)
2026-01-30 03:40:18.424605 | orchestrator |
2026-01-30 03:40:18.424610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:40:18.424615 | orchestrator | Friday 30 January 2026 03:40:13 +0000 (0:00:00.412) 0:00:31.888 ********
2026-01-30 03:40:18.424620 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-30 03:40:18.424624 | orchestrator |
2026-01-30 03:40:18.424629 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424647 | orchestrator | Friday 30 January 2026 03:40:14 +0000 (0:00:00.338) 0:00:32.227 ********
2026-01-30 03:40:18.424652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-30 03:40:18.424657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-30 03:40:18.424662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-30 03:40:18.424672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-30 03:40:18.424677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-30 03:40:18.424695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-30 03:40:18.424705 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-30 03:40:18.424710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-30 03:40:18.424714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-30 03:40:18.424719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-30 03:40:18.424724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-30 03:40:18.424729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-30 03:40:18.424733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-30 03:40:18.424738 | orchestrator |
2026-01-30 03:40:18.424743 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424748 | orchestrator | Friday 30 January 2026 03:40:14 +0000 (0:00:00.544) 0:00:32.772 ********
2026-01-30 03:40:18.424753 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424757 | orchestrator |
2026-01-30 03:40:18.424762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424767 | orchestrator | Friday 30 January 2026 03:40:15 +0000 (0:00:00.209) 0:00:32.982 ********
2026-01-30 03:40:18.424772 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424776 | orchestrator |
2026-01-30 03:40:18.424781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424786 | orchestrator | Friday 30 January 2026 03:40:15 +0000 (0:00:00.190) 0:00:33.172 ********
2026-01-30 03:40:18.424791 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424796 | orchestrator |
2026-01-30 03:40:18.424801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424805 | orchestrator | Friday 30 January 2026 03:40:15 +0000 (0:00:00.207) 0:00:33.379 ********
2026-01-30 03:40:18.424810 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424815 | orchestrator |
2026-01-30 03:40:18.424820 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424825 | orchestrator | Friday 30 January 2026 03:40:15 +0000 (0:00:00.201) 0:00:33.580 ********
2026-01-30 03:40:18.424829 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424834 | orchestrator |
2026-01-30 03:40:18.424839 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424844 | orchestrator | Friday 30 January 2026 03:40:15 +0000 (0:00:00.198) 0:00:33.779 ********
2026-01-30 03:40:18.424848 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424853 | orchestrator |
2026-01-30 03:40:18.424858 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424863 | orchestrator | Friday 30 January 2026 03:40:16 +0000 (0:00:00.213) 0:00:33.993 ********
2026-01-30 03:40:18.424867 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424872 | orchestrator |
2026-01-30 03:40:18.424877 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424882 | orchestrator | Friday 30 January 2026 03:40:16 +0000 (0:00:00.197) 0:00:34.190 ********
2026-01-30 03:40:18.424887 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424891 | orchestrator |
2026-01-30 03:40:18.424896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424902 | orchestrator | Friday 30 January 2026 03:40:16 +0000 (0:00:00.201) 0:00:34.391 ********
2026-01-30 03:40:18.424906 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-30 03:40:18.424911 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-30 03:40:18.424917 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-30 03:40:18.424921 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-30 03:40:18.424926 | orchestrator |
2026-01-30 03:40:18.424936 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424941 | orchestrator | Friday 30 January 2026 03:40:17 +0000 (0:00:00.792) 0:00:35.184 ********
2026-01-30 03:40:18.424946 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424950 | orchestrator |
2026-01-30 03:40:18.424955 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424960 | orchestrator | Friday 30 January 2026 03:40:17 +0000 (0:00:00.186) 0:00:35.371 ********
2026-01-30 03:40:18.424965 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424969 | orchestrator |
2026-01-30 03:40:18.424974 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424979 | orchestrator | Friday 30 January 2026 03:40:17 +0000 (0:00:00.192) 0:00:35.564 ********
2026-01-30 03:40:18.424984 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.424988 | orchestrator |
2026-01-30 03:40:18.424993 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:40:18.424998 | orchestrator | Friday 30 January 2026 03:40:18 +0000 (0:00:00.600) 0:00:36.164 ********
2026-01-30 03:40:18.425003 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:18.425008 | orchestrator |
2026-01-30 03:40:18.425016 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-30 03:40:22.183211 | orchestrator | Friday 30 January 2026 03:40:18 +0000 (0:00:00.206) 0:00:36.370 ********
2026-01-30 03:40:22.183305 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-30 03:40:22.183317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-30 03:40:22.183326 | orchestrator |
2026-01-30 03:40:22.183336 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-30 03:40:22.183360 | orchestrator | Friday 30 January 2026 03:40:18 +0000 (0:00:00.179) 0:00:36.550 ********
2026-01-30 03:40:22.183369 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183377 | orchestrator |
2026-01-30 03:40:22.183385 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-30 03:40:22.183393 | orchestrator | Friday 30 January 2026 03:40:18 +0000 (0:00:00.136) 0:00:36.686 ********
2026-01-30 03:40:22.183402 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183410 | orchestrator |
2026-01-30 03:40:22.183418 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-30 03:40:22.183426 | orchestrator | Friday 30 January 2026 03:40:18 +0000 (0:00:00.138) 0:00:36.825 ********
2026-01-30 03:40:22.183434 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183442 | orchestrator |
2026-01-30 03:40:22.183449 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-30 03:40:22.183457 | orchestrator | Friday 30 January 2026 03:40:18 +0000 (0:00:00.122) 0:00:36.947 ********
2026-01-30 03:40:22.183465 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:40:22.183535 | orchestrator |
2026-01-30 03:40:22.183545 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-30 03:40:22.183553 | orchestrator | Friday 30 January 2026 03:40:19 +0000 (0:00:00.139) 0:00:37.086 ********
2026-01-30 03:40:22.183562 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c96ee3ed-1860-5729-adba-bbe0a3b53c50'}})
2026-01-30 03:40:22.183570 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}})
2026-01-30 03:40:22.183578 | orchestrator |
2026-01-30 03:40:22.183587 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-30 03:40:22.183595 | orchestrator | Friday 30 January 2026 03:40:19 +0000 (0:00:00.159) 0:00:37.246 ********
2026-01-30 03:40:22.183603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c96ee3ed-1860-5729-adba-bbe0a3b53c50'}})
2026-01-30 03:40:22.183613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}})
2026-01-30 03:40:22.183621 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183649 | orchestrator |
2026-01-30 03:40:22.183657 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-30 03:40:22.183665 | orchestrator | Friday 30 January 2026 03:40:19 +0000 (0:00:00.134) 0:00:37.381 ********
2026-01-30 03:40:22.183673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c96ee3ed-1860-5729-adba-bbe0a3b53c50'}})
2026-01-30 03:40:22.183681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}})
2026-01-30 03:40:22.183689 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183697 | orchestrator |
2026-01-30 03:40:22.183705 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-30 03:40:22.183713 | orchestrator | Friday 30 January 2026 03:40:19 +0000 (0:00:00.151) 0:00:37.532 ********
2026-01-30 03:40:22.183722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c96ee3ed-1860-5729-adba-bbe0a3b53c50'}})
2026-01-30 03:40:22.183730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}})
2026-01-30 03:40:22.183738 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183745 | orchestrator |
2026-01-30 03:40:22.183753 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-30 03:40:22.183761 | orchestrator | Friday 30 January 2026 03:40:19 +0000 (0:00:00.155) 0:00:37.687 ********
2026-01-30 03:40:22.183769 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:40:22.183777 | orchestrator |
2026-01-30 03:40:22.183785 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-30 03:40:22.183793 | orchestrator | Friday 30 January 2026 03:40:19 +0000 (0:00:00.126) 0:00:37.814 ********
2026-01-30 03:40:22.183801 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:40:22.183809 | orchestrator |
2026-01-30 03:40:22.183817 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-30 03:40:22.183825 | orchestrator | Friday 30 January 2026 03:40:20 +0000 (0:00:00.302) 0:00:38.116 ********
2026-01-30 03:40:22.183833 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183841 | orchestrator |
2026-01-30 03:40:22.183849 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-30 03:40:22.183857 | orchestrator | Friday 30 January 2026 03:40:20 +0000 (0:00:00.125) 0:00:38.241 ********
2026-01-30 03:40:22.183865 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183873 | orchestrator |
2026-01-30 03:40:22.183881 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-30 03:40:22.183889 | orchestrator | Friday 30 January 2026 03:40:20 +0000 (0:00:00.124) 0:00:38.366 ********
2026-01-30 03:40:22.183897 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:40:22.183904 | orchestrator |
2026-01-30 03:40:22.183912 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-30 03:40:22.183920 | orchestrator | Friday 30 January 2026 03:40:20 +0000 (0:00:00.132) 0:00:38.499 ********
2026-01-30 03:40:22.183928 | orchestrator | ok: [testbed-node-5] => {
2026-01-30 03:40:22.183936 | orchestrator |     "ceph_osd_devices": {
2026-01-30 03:40:22.183945 | orchestrator |         "sdb": {
2026-01-30 03:40:22.183969 | orchestrator |  "osd_lvm_uuid": "c96ee3ed-1860-5729-adba-bbe0a3b53c50" 2026-01-30 03:40:22.183978 | orchestrator |  }, 2026-01-30 03:40:22.183986 | orchestrator |  "sdc": { 2026-01-30 03:40:22.183994 | orchestrator |  "osd_lvm_uuid": "484c5dd7-ec3c-5b7c-8938-cd2a84a156dd" 2026-01-30 03:40:22.184003 | orchestrator |  } 2026-01-30 03:40:22.184011 | orchestrator |  } 2026-01-30 03:40:22.184019 | orchestrator | } 2026-01-30 03:40:22.184027 | orchestrator | 2026-01-30 03:40:22.184040 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-30 03:40:22.184048 | orchestrator | Friday 30 January 2026 03:40:20 +0000 (0:00:00.121) 0:00:38.621 ******** 2026-01-30 03:40:22.184057 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:40:22.184072 | orchestrator | 2026-01-30 03:40:22.184079 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-30 03:40:22.184087 | orchestrator | Friday 30 January 2026 03:40:20 +0000 (0:00:00.114) 0:00:38.736 ******** 2026-01-30 03:40:22.184095 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:40:22.184108 | orchestrator | 2026-01-30 03:40:22.184121 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-30 03:40:22.184134 | orchestrator | Friday 30 January 2026 03:40:20 +0000 (0:00:00.125) 0:00:38.861 ******** 2026-01-30 03:40:22.184147 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:40:22.184160 | orchestrator | 2026-01-30 03:40:22.184173 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-30 03:40:22.184186 | orchestrator | Friday 30 January 2026 03:40:21 +0000 (0:00:00.122) 0:00:38.983 ******** 2026-01-30 03:40:22.184200 | orchestrator | changed: [testbed-node-5] => { 2026-01-30 03:40:22.184213 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-30 03:40:22.184225 | orchestrator | 
 "ceph_osd_devices": { 2026-01-30 03:40:22.184233 | orchestrator |  "sdb": { 2026-01-30 03:40:22.184241 | orchestrator |  "osd_lvm_uuid": "c96ee3ed-1860-5729-adba-bbe0a3b53c50" 2026-01-30 03:40:22.184249 | orchestrator |  }, 2026-01-30 03:40:22.184257 | orchestrator |  "sdc": { 2026-01-30 03:40:22.184265 | orchestrator |  "osd_lvm_uuid": "484c5dd7-ec3c-5b7c-8938-cd2a84a156dd" 2026-01-30 03:40:22.184273 | orchestrator |  } 2026-01-30 03:40:22.184281 | orchestrator |  }, 2026-01-30 03:40:22.184289 | orchestrator |  "lvm_volumes": [ 2026-01-30 03:40:22.184297 | orchestrator |  { 2026-01-30 03:40:22.184305 | orchestrator |  "data": "osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50", 2026-01-30 03:40:22.184313 | orchestrator |  "data_vg": "ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50" 2026-01-30 03:40:22.184320 | orchestrator |  }, 2026-01-30 03:40:22.184328 | orchestrator |  { 2026-01-30 03:40:22.184336 | orchestrator |  "data": "osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd", 2026-01-30 03:40:22.184344 | orchestrator |  "data_vg": "ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd" 2026-01-30 03:40:22.184352 | orchestrator |  } 2026-01-30 03:40:22.184360 | orchestrator |  ] 2026-01-30 03:40:22.184368 | orchestrator |  } 2026-01-30 03:40:22.184376 | orchestrator | } 2026-01-30 03:40:22.184384 | orchestrator | 2026-01-30 03:40:22.184392 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-30 03:40:22.184400 | orchestrator | Friday 30 January 2026 03:40:21 +0000 (0:00:00.192) 0:00:39.176 ******** 2026-01-30 03:40:22.184407 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-30 03:40:22.184415 | orchestrator | 2026-01-30 03:40:22.184423 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:40:22.184431 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-30 03:40:22.184440 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-30 03:40:22.184448 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-30 03:40:22.184456 | orchestrator | 2026-01-30 03:40:22.184464 | orchestrator | 2026-01-30 03:40:22.184471 | orchestrator | 2026-01-30 03:40:22.184510 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:40:22.184522 | orchestrator | Friday 30 January 2026 03:40:22 +0000 (0:00:00.935) 0:00:40.112 ******** 2026-01-30 03:40:22.184533 | orchestrator | =============================================================================== 2026-01-30 03:40:22.184545 | orchestrator | Write configuration file ------------------------------------------------ 3.90s 2026-01-30 03:40:22.184567 | orchestrator | Add known partitions to the list of available block devices ------------- 1.66s 2026-01-30 03:40:22.184581 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2026-01-30 03:40:22.184590 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s 2026-01-30 03:40:22.184598 | orchestrator | Print configuration data ------------------------------------------------ 0.95s 2026-01-30 03:40:22.184606 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2026-01-30 03:40:22.184614 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-01-30 03:40:22.184622 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s 2026-01-30 03:40:22.184630 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2026-01-30 03:40:22.184638 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.64s 2026-01-30 
03:40:22.184646 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2026-01-30 03:40:22.184653 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2026-01-30 03:40:22.184661 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.60s 2026-01-30 03:40:22.184677 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s 2026-01-30 03:40:22.516310 | orchestrator | Set OSD devices config data --------------------------------------------- 0.60s 2026-01-30 03:40:22.516412 | orchestrator | Add known links to the list of available block devices ------------------ 0.56s 2026-01-30 03:40:22.516428 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s 2026-01-30 03:40:22.516458 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s 2026-01-30 03:40:22.516544 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-01-30 03:40:22.516571 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.54s 2026-01-30 03:40:44.873594 | orchestrator | 2026-01-30 03:40:44 | INFO  | Task 04e03824-b901-4173-82b7-db3a215511bd (sync inventory) is running in background. Output coming soon. 
2026-01-30 03:41:10.701076 | orchestrator | 2026-01-30 03:40:46 | INFO  | Starting group_vars file reorganization 2026-01-30 03:41:10.701190 | orchestrator | 2026-01-30 03:40:46 | INFO  | Moved 0 file(s) to their respective directories 2026-01-30 03:41:10.701207 | orchestrator | 2026-01-30 03:40:46 | INFO  | Group_vars file reorganization completed 2026-01-30 03:41:10.701216 | orchestrator | 2026-01-30 03:40:48 | INFO  | Starting variable preparation from inventory 2026-01-30 03:41:10.701224 | orchestrator | 2026-01-30 03:40:51 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-01-30 03:41:10.701248 | orchestrator | 2026-01-30 03:40:51 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-01-30 03:41:10.701257 | orchestrator | 2026-01-30 03:40:51 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-01-30 03:41:10.701265 | orchestrator | 2026-01-30 03:40:51 | INFO  | 3 file(s) written, 6 host(s) processed 2026-01-30 03:41:10.701273 | orchestrator | 2026-01-30 03:40:51 | INFO  | Variable preparation completed 2026-01-30 03:41:10.701282 | orchestrator | 2026-01-30 03:40:52 | INFO  | Starting inventory overwrite handling 2026-01-30 03:41:10.701290 | orchestrator | 2026-01-30 03:40:52 | INFO  | Handling group overwrites in 99-overwrite 2026-01-30 03:41:10.701298 | orchestrator | 2026-01-30 03:40:52 | INFO  | Removing group frr:children from 60-generic 2026-01-30 03:41:10.701306 | orchestrator | 2026-01-30 03:40:52 | INFO  | Removing group netbird:children from 50-infrastructure 2026-01-30 03:41:10.701315 | orchestrator | 2026-01-30 03:40:52 | INFO  | Removing group ceph-mds from 50-ceph 2026-01-30 03:41:10.701350 | orchestrator | 2026-01-30 03:40:52 | INFO  | Removing group ceph-rgw from 50-ceph 2026-01-30 03:41:10.701359 | orchestrator | 2026-01-30 03:40:52 | INFO  | Handling group overwrites in 20-roles 2026-01-30 03:41:10.701367 | orchestrator | 2026-01-30 03:40:52 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-01-30 03:41:10.701375 | orchestrator | 2026-01-30 03:40:52 | INFO  | Removed 5 group(s) in total 2026-01-30 03:41:10.701383 | orchestrator | 2026-01-30 03:40:52 | INFO  | Inventory overwrite handling completed 2026-01-30 03:41:10.701392 | orchestrator | 2026-01-30 03:40:53 | INFO  | Starting merge of inventory files 2026-01-30 03:41:10.701400 | orchestrator | 2026-01-30 03:40:53 | INFO  | Inventory files merged successfully 2026-01-30 03:41:10.701408 | orchestrator | 2026-01-30 03:40:57 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-01-30 03:41:10.701416 | orchestrator | 2026-01-30 03:41:09 | INFO  | Successfully wrote ClusterShell configuration 2026-01-30 03:41:10.701424 | orchestrator | [master 0293844] 2026-01-30-03-41 2026-01-30 03:41:10.701434 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-01-30 03:41:12.875960 | orchestrator | 2026-01-30 03:41:12 | INFO  | Task ff213934-60ba-4fdb-88da-265c77fc0416 (ceph-create-lvm-devices) was prepared for execution. 2026-01-30 03:41:12.876047 | orchestrator | 2026-01-30 03:41:12 | INFO  | It takes a moment until task ff213934-60ba-4fdb-88da-265c77fc0416 (ceph-create-lvm-devices) has been started and output is visible here. 
2026-01-30 03:41:24.090152 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-30 03:41:24.090327 | orchestrator | 2.16.14 2026-01-30 03:41:24.090347 | orchestrator | 2026-01-30 03:41:24.090358 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-30 03:41:24.090369 | orchestrator | 2026-01-30 03:41:24.090380 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-30 03:41:24.090418 | orchestrator | Friday 30 January 2026 03:41:17 +0000 (0:00:00.293) 0:00:00.293 ******** 2026-01-30 03:41:24.090431 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-30 03:41:24.090441 | orchestrator | 2026-01-30 03:41:24.090509 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-30 03:41:24.090519 | orchestrator | Friday 30 January 2026 03:41:17 +0000 (0:00:00.239) 0:00:00.533 ******** 2026-01-30 03:41:24.090530 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:24.090540 | orchestrator | 2026-01-30 03:41:24.090550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.090566 | orchestrator | Friday 30 January 2026 03:41:17 +0000 (0:00:00.219) 0:00:00.753 ******** 2026-01-30 03:41:24.090582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-30 03:41:24.090610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-30 03:41:24.090646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-30 03:41:24.090664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-30 03:41:24.090681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-30 
03:41:24.090698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-30 03:41:24.090714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-30 03:41:24.090731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-30 03:41:24.090748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-30 03:41:24.090766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-30 03:41:24.090810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-30 03:41:24.090827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-30 03:41:24.090838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-30 03:41:24.090848 | orchestrator | 2026-01-30 03:41:24.090858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.090868 | orchestrator | Friday 30 January 2026 03:41:17 +0000 (0:00:00.477) 0:00:01.231 ******** 2026-01-30 03:41:24.090877 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.090887 | orchestrator | 2026-01-30 03:41:24.090897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.090907 | orchestrator | Friday 30 January 2026 03:41:18 +0000 (0:00:00.206) 0:00:01.437 ******** 2026-01-30 03:41:24.090917 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.090926 | orchestrator | 2026-01-30 03:41:24.090936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.090946 | orchestrator | Friday 30 January 2026 03:41:18 +0000 (0:00:00.200) 0:00:01.638 ******** 2026-01-30 
03:41:24.090955 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.090965 | orchestrator | 2026-01-30 03:41:24.090975 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.090985 | orchestrator | Friday 30 January 2026 03:41:18 +0000 (0:00:00.192) 0:00:01.830 ******** 2026-01-30 03:41:24.090994 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091004 | orchestrator | 2026-01-30 03:41:24.091014 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091023 | orchestrator | Friday 30 January 2026 03:41:18 +0000 (0:00:00.196) 0:00:02.027 ******** 2026-01-30 03:41:24.091033 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091043 | orchestrator | 2026-01-30 03:41:24.091053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091063 | orchestrator | Friday 30 January 2026 03:41:18 +0000 (0:00:00.200) 0:00:02.227 ******** 2026-01-30 03:41:24.091072 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091082 | orchestrator | 2026-01-30 03:41:24.091092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091101 | orchestrator | Friday 30 January 2026 03:41:19 +0000 (0:00:00.192) 0:00:02.419 ******** 2026-01-30 03:41:24.091111 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091121 | orchestrator | 2026-01-30 03:41:24.091130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091140 | orchestrator | Friday 30 January 2026 03:41:19 +0000 (0:00:00.200) 0:00:02.620 ******** 2026-01-30 03:41:24.091150 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091159 | orchestrator | 2026-01-30 03:41:24.091169 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-01-30 03:41:24.091179 | orchestrator | Friday 30 January 2026 03:41:19 +0000 (0:00:00.197) 0:00:02.817 ******** 2026-01-30 03:41:24.091189 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a) 2026-01-30 03:41:24.091200 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a) 2026-01-30 03:41:24.091209 | orchestrator | 2026-01-30 03:41:24.091219 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091247 | orchestrator | Friday 30 January 2026 03:41:19 +0000 (0:00:00.396) 0:00:03.214 ******** 2026-01-30 03:41:24.091257 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e) 2026-01-30 03:41:24.091267 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e) 2026-01-30 03:41:24.091276 | orchestrator | 2026-01-30 03:41:24.091286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091303 | orchestrator | Friday 30 January 2026 03:41:20 +0000 (0:00:00.592) 0:00:03.807 ******** 2026-01-30 03:41:24.091313 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c) 2026-01-30 03:41:24.091322 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c) 2026-01-30 03:41:24.091332 | orchestrator | 2026-01-30 03:41:24.091341 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091351 | orchestrator | Friday 30 January 2026 03:41:21 +0000 (0:00:00.608) 0:00:04.415 ******** 2026-01-30 03:41:24.091361 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db) 2026-01-30 03:41:24.091376 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db) 2026-01-30 03:41:24.091386 | orchestrator | 2026-01-30 03:41:24.091396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:24.091406 | orchestrator | Friday 30 January 2026 03:41:21 +0000 (0:00:00.783) 0:00:05.199 ******** 2026-01-30 03:41:24.091416 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-30 03:41:24.091426 | orchestrator | 2026-01-30 03:41:24.091435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:24.091525 | orchestrator | Friday 30 January 2026 03:41:22 +0000 (0:00:00.340) 0:00:05.539 ******** 2026-01-30 03:41:24.091545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-30 03:41:24.091560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-30 03:41:24.091577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-30 03:41:24.091594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-30 03:41:24.091611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-30 03:41:24.091627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-30 03:41:24.091643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-30 03:41:24.091660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-30 03:41:24.091676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-30 03:41:24.091692 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-30 03:41:24.091703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-30 03:41:24.091712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-30 03:41:24.091722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-30 03:41:24.091731 | orchestrator | 2026-01-30 03:41:24.091794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:24.091817 | orchestrator | Friday 30 January 2026 03:41:22 +0000 (0:00:00.392) 0:00:05.932 ******** 2026-01-30 03:41:24.091827 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091837 | orchestrator | 2026-01-30 03:41:24.091846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:24.091856 | orchestrator | Friday 30 January 2026 03:41:22 +0000 (0:00:00.200) 0:00:06.132 ******** 2026-01-30 03:41:24.091866 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091875 | orchestrator | 2026-01-30 03:41:24.091885 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:24.091894 | orchestrator | Friday 30 January 2026 03:41:23 +0000 (0:00:00.205) 0:00:06.338 ******** 2026-01-30 03:41:24.091904 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.091924 | orchestrator | 2026-01-30 03:41:24.091933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:24.092008 | orchestrator | Friday 30 January 2026 03:41:23 +0000 (0:00:00.192) 0:00:06.530 ******** 2026-01-30 03:41:24.092018 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.092083 | orchestrator | 2026-01-30 03:41:24.092101 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-30 03:41:24.092116 | orchestrator | Friday 30 January 2026 03:41:23 +0000 (0:00:00.200) 0:00:06.731 ******** 2026-01-30 03:41:24.092130 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.092145 | orchestrator | 2026-01-30 03:41:24.092161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:24.092176 | orchestrator | Friday 30 January 2026 03:41:23 +0000 (0:00:00.195) 0:00:06.927 ******** 2026-01-30 03:41:24.092191 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.092209 | orchestrator | 2026-01-30 03:41:24.092225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:24.092240 | orchestrator | Friday 30 January 2026 03:41:23 +0000 (0:00:00.185) 0:00:07.112 ******** 2026-01-30 03:41:24.092255 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:24.092270 | orchestrator | 2026-01-30 03:41:24.092300 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:31.894223 | orchestrator | Friday 30 January 2026 03:41:24 +0000 (0:00:00.197) 0:00:07.310 ******** 2026-01-30 03:41:31.894331 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.894348 | orchestrator | 2026-01-30 03:41:31.894387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:31.894400 | orchestrator | Friday 30 January 2026 03:41:24 +0000 (0:00:00.555) 0:00:07.865 ******** 2026-01-30 03:41:31.894412 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-30 03:41:31.894424 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-30 03:41:31.894435 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-30 03:41:31.894489 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-30 03:41:31.894508 | orchestrator | 2026-01-30 
03:41:31.894529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:31.894547 | orchestrator | Friday 30 January 2026 03:41:25 +0000 (0:00:00.639) 0:00:08.505 ******** 2026-01-30 03:41:31.894562 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.894572 | orchestrator | 2026-01-30 03:41:31.894584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:31.894595 | orchestrator | Friday 30 January 2026 03:41:25 +0000 (0:00:00.207) 0:00:08.712 ******** 2026-01-30 03:41:31.894606 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.894617 | orchestrator | 2026-01-30 03:41:31.894645 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:31.894656 | orchestrator | Friday 30 January 2026 03:41:25 +0000 (0:00:00.195) 0:00:08.908 ******** 2026-01-30 03:41:31.894667 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.894678 | orchestrator | 2026-01-30 03:41:31.894689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:41:31.894700 | orchestrator | Friday 30 January 2026 03:41:25 +0000 (0:00:00.193) 0:00:09.102 ******** 2026-01-30 03:41:31.894711 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.894721 | orchestrator | 2026-01-30 03:41:31.894732 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-30 03:41:31.894743 | orchestrator | Friday 30 January 2026 03:41:26 +0000 (0:00:00.206) 0:00:09.308 ******** 2026-01-30 03:41:31.894754 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.894767 | orchestrator | 2026-01-30 03:41:31.894779 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-30 03:41:31.894791 | orchestrator | Friday 30 January 2026 03:41:26 +0000 (0:00:00.139) 
0:00:09.448 ******** 2026-01-30 03:41:31.894804 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}}) 2026-01-30 03:41:31.894839 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}}) 2026-01-30 03:41:31.894852 | orchestrator | 2026-01-30 03:41:31.894864 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-30 03:41:31.894878 | orchestrator | Friday 30 January 2026 03:41:26 +0000 (0:00:00.180) 0:00:09.629 ******** 2026-01-30 03:41:31.894891 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}) 2026-01-30 03:41:31.894905 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}) 2026-01-30 03:41:31.894917 | orchestrator | 2026-01-30 03:41:31.894929 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-30 03:41:31.894942 | orchestrator | Friday 30 January 2026 03:41:28 +0000 (0:00:01.959) 0:00:11.588 ******** 2026-01-30 03:41:31.894954 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.894968 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.894981 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.894993 | orchestrator | 2026-01-30 03:41:31.895005 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-30 03:41:31.895018 | orchestrator | Friday 30 January 2026 
03:41:28 +0000 (0:00:00.147) 0:00:11.735 ******** 2026-01-30 03:41:31.895031 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}) 2026-01-30 03:41:31.895044 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}) 2026-01-30 03:41:31.895056 | orchestrator | 2026-01-30 03:41:31.895069 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-30 03:41:31.895081 | orchestrator | Friday 30 January 2026 03:41:29 +0000 (0:00:01.470) 0:00:13.206 ******** 2026-01-30 03:41:31.895093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.895106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.895118 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895131 | orchestrator | 2026-01-30 03:41:31.895143 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-30 03:41:31.895157 | orchestrator | Friday 30 January 2026 03:41:30 +0000 (0:00:00.153) 0:00:13.360 ******** 2026-01-30 03:41:31.895188 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895200 | orchestrator | 2026-01-30 03:41:31.895211 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-30 03:41:31.895222 | orchestrator | Friday 30 January 2026 03:41:30 +0000 (0:00:00.311) 0:00:13.671 ******** 2026-01-30 03:41:31.895232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 
'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.895244 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.895255 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895266 | orchestrator | 2026-01-30 03:41:31.895277 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-30 03:41:31.895288 | orchestrator | Friday 30 January 2026 03:41:30 +0000 (0:00:00.146) 0:00:13.818 ******** 2026-01-30 03:41:31.895306 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895317 | orchestrator | 2026-01-30 03:41:31.895328 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-30 03:41:31.895339 | orchestrator | Friday 30 January 2026 03:41:30 +0000 (0:00:00.125) 0:00:13.944 ******** 2026-01-30 03:41:31.895356 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.895367 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.895378 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895390 | orchestrator | 2026-01-30 03:41:31.895400 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-30 03:41:31.895411 | orchestrator | Friday 30 January 2026 03:41:30 +0000 (0:00:00.149) 0:00:14.093 ******** 2026-01-30 03:41:31.895422 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895433 | orchestrator | 2026-01-30 03:41:31.895471 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-30 03:41:31.895483 | orchestrator | Friday 
30 January 2026 03:41:31 +0000 (0:00:00.150) 0:00:14.243 ******** 2026-01-30 03:41:31.895494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.895505 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.895516 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895527 | orchestrator | 2026-01-30 03:41:31.895538 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-30 03:41:31.895549 | orchestrator | Friday 30 January 2026 03:41:31 +0000 (0:00:00.160) 0:00:14.404 ******** 2026-01-30 03:41:31.895560 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:31.895571 | orchestrator | 2026-01-30 03:41:31.895582 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-30 03:41:31.895593 | orchestrator | Friday 30 January 2026 03:41:31 +0000 (0:00:00.136) 0:00:14.540 ******** 2026-01-30 03:41:31.895604 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.895616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.895629 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895648 | orchestrator | 2026-01-30 03:41:31.895665 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-30 03:41:31.895682 | orchestrator | Friday 30 January 2026 03:41:31 +0000 (0:00:00.150) 0:00:14.691 ******** 2026-01-30 03:41:31.895698 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.895716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.895733 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895750 | orchestrator | 2026-01-30 03:41:31.895766 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-30 03:41:31.895783 | orchestrator | Friday 30 January 2026 03:41:31 +0000 (0:00:00.146) 0:00:14.838 ******** 2026-01-30 03:41:31.895799 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:31.895818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:31.895848 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895866 | orchestrator | 2026-01-30 03:41:31.895883 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-30 03:41:31.895903 | orchestrator | Friday 30 January 2026 03:41:31 +0000 (0:00:00.155) 0:00:14.993 ******** 2026-01-30 03:41:31.895921 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:31.895939 | orchestrator | 2026-01-30 03:41:31.895958 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-30 03:41:31.895988 | orchestrator | Friday 30 January 2026 03:41:31 +0000 (0:00:00.125) 0:00:15.119 ******** 2026-01-30 03:41:38.188829 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.188916 | orchestrator | 2026-01-30 03:41:38.188927 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-30 03:41:38.188936 | orchestrator | Friday 30 January 2026 03:41:32 +0000 (0:00:00.124) 0:00:15.243 ******** 2026-01-30 03:41:38.188944 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.188952 | orchestrator | 2026-01-30 03:41:38.188960 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-30 03:41:38.188968 | orchestrator | Friday 30 January 2026 03:41:32 +0000 (0:00:00.302) 0:00:15.546 ******** 2026-01-30 03:41:38.188975 | orchestrator | ok: [testbed-node-3] => { 2026-01-30 03:41:38.188983 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-30 03:41:38.188991 | orchestrator | } 2026-01-30 03:41:38.188999 | orchestrator | 2026-01-30 03:41:38.189006 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-30 03:41:38.189014 | orchestrator | Friday 30 January 2026 03:41:32 +0000 (0:00:00.148) 0:00:15.695 ******** 2026-01-30 03:41:38.189021 | orchestrator | ok: [testbed-node-3] => { 2026-01-30 03:41:38.189029 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-30 03:41:38.189036 | orchestrator | } 2026-01-30 03:41:38.189043 | orchestrator | 2026-01-30 03:41:38.189050 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-30 03:41:38.189072 | orchestrator | Friday 30 January 2026 03:41:32 +0000 (0:00:00.135) 0:00:15.830 ******** 2026-01-30 03:41:38.189080 | orchestrator | ok: [testbed-node-3] => { 2026-01-30 03:41:38.189088 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-30 03:41:38.189095 | orchestrator | } 2026-01-30 03:41:38.189103 | orchestrator | 2026-01-30 03:41:38.189110 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-30 03:41:38.189118 | orchestrator | Friday 30 January 2026 03:41:32 +0000 (0:00:00.142) 0:00:15.973 ******** 2026-01-30 03:41:38.189125 | orchestrator | ok: 
[testbed-node-3] 2026-01-30 03:41:38.189133 | orchestrator | 2026-01-30 03:41:38.189140 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-30 03:41:38.189148 | orchestrator | Friday 30 January 2026 03:41:33 +0000 (0:00:00.663) 0:00:16.636 ******** 2026-01-30 03:41:38.189155 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:38.189163 | orchestrator | 2026-01-30 03:41:38.189170 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-30 03:41:38.189229 | orchestrator | Friday 30 January 2026 03:41:33 +0000 (0:00:00.530) 0:00:17.166 ******** 2026-01-30 03:41:38.189237 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:38.189244 | orchestrator | 2026-01-30 03:41:38.189251 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-30 03:41:38.189259 | orchestrator | Friday 30 January 2026 03:41:34 +0000 (0:00:00.509) 0:00:17.676 ******** 2026-01-30 03:41:38.189266 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:38.189273 | orchestrator | 2026-01-30 03:41:38.189281 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-30 03:41:38.189288 | orchestrator | Friday 30 January 2026 03:41:34 +0000 (0:00:00.154) 0:00:17.830 ******** 2026-01-30 03:41:38.189295 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189303 | orchestrator | 2026-01-30 03:41:38.189310 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-30 03:41:38.189336 | orchestrator | Friday 30 January 2026 03:41:34 +0000 (0:00:00.110) 0:00:17.941 ******** 2026-01-30 03:41:38.189344 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189352 | orchestrator | 2026-01-30 03:41:38.189359 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-30 03:41:38.189366 | orchestrator | 
Friday 30 January 2026 03:41:34 +0000 (0:00:00.122) 0:00:18.063 ******** 2026-01-30 03:41:38.189374 | orchestrator | ok: [testbed-node-3] => { 2026-01-30 03:41:38.189381 | orchestrator |  "vgs_report": { 2026-01-30 03:41:38.189391 | orchestrator |  "vg": [] 2026-01-30 03:41:38.189403 | orchestrator |  } 2026-01-30 03:41:38.189415 | orchestrator | } 2026-01-30 03:41:38.189427 | orchestrator | 2026-01-30 03:41:38.189466 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-30 03:41:38.189475 | orchestrator | Friday 30 January 2026 03:41:34 +0000 (0:00:00.147) 0:00:18.211 ******** 2026-01-30 03:41:38.189484 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189496 | orchestrator | 2026-01-30 03:41:38.189509 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-30 03:41:38.189520 | orchestrator | Friday 30 January 2026 03:41:35 +0000 (0:00:00.138) 0:00:18.349 ******** 2026-01-30 03:41:38.189528 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189535 | orchestrator | 2026-01-30 03:41:38.189543 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-30 03:41:38.189550 | orchestrator | Friday 30 January 2026 03:41:35 +0000 (0:00:00.318) 0:00:18.667 ******** 2026-01-30 03:41:38.189557 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189564 | orchestrator | 2026-01-30 03:41:38.189572 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-30 03:41:38.189579 | orchestrator | Friday 30 January 2026 03:41:35 +0000 (0:00:00.130) 0:00:18.798 ******** 2026-01-30 03:41:38.189586 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189593 | orchestrator | 2026-01-30 03:41:38.189601 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-30 03:41:38.189608 | orchestrator | Friday 
30 January 2026 03:41:35 +0000 (0:00:00.134) 0:00:18.933 ******** 2026-01-30 03:41:38.189615 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189622 | orchestrator | 2026-01-30 03:41:38.189630 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-30 03:41:38.189637 | orchestrator | Friday 30 January 2026 03:41:35 +0000 (0:00:00.143) 0:00:19.076 ******** 2026-01-30 03:41:38.189644 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189651 | orchestrator | 2026-01-30 03:41:38.189659 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-30 03:41:38.189666 | orchestrator | Friday 30 January 2026 03:41:35 +0000 (0:00:00.130) 0:00:19.207 ******** 2026-01-30 03:41:38.189673 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189681 | orchestrator | 2026-01-30 03:41:38.189693 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-30 03:41:38.189705 | orchestrator | Friday 30 January 2026 03:41:36 +0000 (0:00:00.137) 0:00:19.345 ******** 2026-01-30 03:41:38.189736 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189745 | orchestrator | 2026-01-30 03:41:38.189752 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-30 03:41:38.189760 | orchestrator | Friday 30 January 2026 03:41:36 +0000 (0:00:00.137) 0:00:19.482 ******** 2026-01-30 03:41:38.189767 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189775 | orchestrator | 2026-01-30 03:41:38.189782 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-30 03:41:38.189789 | orchestrator | Friday 30 January 2026 03:41:36 +0000 (0:00:00.133) 0:00:19.615 ******** 2026-01-30 03:41:38.189797 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189804 | orchestrator | 2026-01-30 03:41:38.189811 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-30 03:41:38.189819 | orchestrator | Friday 30 January 2026 03:41:36 +0000 (0:00:00.132) 0:00:19.748 ******** 2026-01-30 03:41:38.189833 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189840 | orchestrator | 2026-01-30 03:41:38.189847 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-30 03:41:38.189855 | orchestrator | Friday 30 January 2026 03:41:36 +0000 (0:00:00.132) 0:00:19.880 ******** 2026-01-30 03:41:38.189862 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189869 | orchestrator | 2026-01-30 03:41:38.189882 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-30 03:41:38.189890 | orchestrator | Friday 30 January 2026 03:41:36 +0000 (0:00:00.136) 0:00:20.017 ******** 2026-01-30 03:41:38.189898 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189905 | orchestrator | 2026-01-30 03:41:38.189912 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-30 03:41:38.189919 | orchestrator | Friday 30 January 2026 03:41:36 +0000 (0:00:00.122) 0:00:20.140 ******** 2026-01-30 03:41:38.189927 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189934 | orchestrator | 2026-01-30 03:41:38.189941 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-30 03:41:38.189949 | orchestrator | Friday 30 January 2026 03:41:37 +0000 (0:00:00.363) 0:00:20.504 ******** 2026-01-30 03:41:38.189957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:38.189969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 
'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:38.189983 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.189995 | orchestrator | 2026-01-30 03:41:38.190008 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-30 03:41:38.190070 | orchestrator | Friday 30 January 2026 03:41:37 +0000 (0:00:00.147) 0:00:20.652 ******** 2026-01-30 03:41:38.190079 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:38.190086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:38.190094 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.190101 | orchestrator | 2026-01-30 03:41:38.190108 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-30 03:41:38.190116 | orchestrator | Friday 30 January 2026 03:41:37 +0000 (0:00:00.152) 0:00:20.804 ******** 2026-01-30 03:41:38.190123 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:38.190130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:38.190138 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.190145 | orchestrator | 2026-01-30 03:41:38.190152 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-30 03:41:38.190160 | orchestrator | Friday 30 January 2026 03:41:37 +0000 (0:00:00.150) 0:00:20.955 ******** 2026-01-30 03:41:38.190167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:38.190174 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:38.190182 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.190189 | orchestrator | 2026-01-30 03:41:38.190196 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-30 03:41:38.190204 | orchestrator | Friday 30 January 2026 03:41:37 +0000 (0:00:00.149) 0:00:21.105 ******** 2026-01-30 03:41:38.190217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:38.190224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:38.190231 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:38.190239 | orchestrator | 2026-01-30 03:41:38.190246 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-30 03:41:38.190253 | orchestrator | Friday 30 January 2026 03:41:38 +0000 (0:00:00.153) 0:00:21.258 ******** 2026-01-30 03:41:38.190267 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:43.201333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:43.201525 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:43.201561 | orchestrator | 2026-01-30 03:41:43.201585 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-01-30 03:41:43.201599 | orchestrator | Friday 30 January 2026 03:41:38 +0000 (0:00:00.156) 0:00:21.415 ******** 2026-01-30 03:41:43.201611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:43.201623 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:43.201634 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:43.201645 | orchestrator | 2026-01-30 03:41:43.201673 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-30 03:41:43.201685 | orchestrator | Friday 30 January 2026 03:41:38 +0000 (0:00:00.152) 0:00:21.568 ******** 2026-01-30 03:41:43.201696 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:43.201707 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:43.201718 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:43.201729 | orchestrator | 2026-01-30 03:41:43.201739 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-30 03:41:43.201750 | orchestrator | Friday 30 January 2026 03:41:38 +0000 (0:00:00.150) 0:00:21.718 ******** 2026-01-30 03:41:43.201761 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:43.201773 | orchestrator | 2026-01-30 03:41:43.201784 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-30 03:41:43.201795 | orchestrator | Friday 30 January 2026 03:41:39 +0000 
(0:00:00.530) 0:00:22.249 ******** 2026-01-30 03:41:43.201806 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:43.201816 | orchestrator | 2026-01-30 03:41:43.201827 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-30 03:41:43.201838 | orchestrator | Friday 30 January 2026 03:41:39 +0000 (0:00:00.541) 0:00:22.791 ******** 2026-01-30 03:41:43.201849 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:41:43.201860 | orchestrator | 2026-01-30 03:41:43.201873 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-30 03:41:43.201886 | orchestrator | Friday 30 January 2026 03:41:39 +0000 (0:00:00.143) 0:00:22.935 ******** 2026-01-30 03:41:43.201899 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'vg_name': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}) 2026-01-30 03:41:43.201913 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'vg_name': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}) 2026-01-30 03:41:43.201947 | orchestrator | 2026-01-30 03:41:43.201960 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-30 03:41:43.201973 | orchestrator | Friday 30 January 2026 03:41:39 +0000 (0:00:00.162) 0:00:23.097 ******** 2026-01-30 03:41:43.201985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:43.201997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:43.202010 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:43.202082 | orchestrator | 2026-01-30 03:41:43.202095 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-01-30 03:41:43.202108 | orchestrator | Friday 30 January 2026 03:41:40 +0000 (0:00:00.329) 0:00:23.427 ******** 2026-01-30 03:41:43.202121 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:43.202134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:43.202146 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:43.202194 | orchestrator | 2026-01-30 03:41:43.202208 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-30 03:41:43.202220 | orchestrator | Friday 30 January 2026 03:41:40 +0000 (0:00:00.158) 0:00:23.586 ******** 2026-01-30 03:41:43.202232 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 03:41:43.202245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 03:41:43.202257 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:41:43.202269 | orchestrator | 2026-01-30 03:41:43.202282 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-30 03:41:43.202293 | orchestrator | Friday 30 January 2026 03:41:40 +0000 (0:00:00.155) 0:00:23.741 ******** 2026-01-30 03:41:43.202322 | orchestrator | ok: [testbed-node-3] => { 2026-01-30 03:41:43.202334 | orchestrator |  "lvm_report": { 2026-01-30 03:41:43.202346 | orchestrator |  "lv": [ 2026-01-30 03:41:43.202357 | orchestrator |  { 2026-01-30 03:41:43.202369 | orchestrator |  "lv_name": 
"osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0", 2026-01-30 03:41:43.202380 | orchestrator |  "vg_name": "ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0" 2026-01-30 03:41:43.202391 | orchestrator |  }, 2026-01-30 03:41:43.202403 | orchestrator |  { 2026-01-30 03:41:43.202414 | orchestrator |  "lv_name": "osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b", 2026-01-30 03:41:43.202425 | orchestrator |  "vg_name": "ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b" 2026-01-30 03:41:43.202492 | orchestrator |  } 2026-01-30 03:41:43.202506 | orchestrator |  ], 2026-01-30 03:41:43.202517 | orchestrator |  "pv": [ 2026-01-30 03:41:43.202528 | orchestrator |  { 2026-01-30 03:41:43.202539 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-30 03:41:43.202550 | orchestrator |  "vg_name": "ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0" 2026-01-30 03:41:43.202561 | orchestrator |  }, 2026-01-30 03:41:43.202572 | orchestrator |  { 2026-01-30 03:41:43.202590 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-30 03:41:43.202601 | orchestrator |  "vg_name": "ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b" 2026-01-30 03:41:43.202612 | orchestrator |  } 2026-01-30 03:41:43.202623 | orchestrator |  ] 2026-01-30 03:41:43.202634 | orchestrator |  } 2026-01-30 03:41:43.202646 | orchestrator | } 2026-01-30 03:41:43.202671 | orchestrator | 2026-01-30 03:41:43.202682 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-30 03:41:43.202693 | orchestrator | 2026-01-30 03:41:43.202704 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-30 03:41:43.202715 | orchestrator | Friday 30 January 2026 03:41:40 +0000 (0:00:00.289) 0:00:24.031 ******** 2026-01-30 03:41:43.202726 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-30 03:41:43.202737 | orchestrator | 2026-01-30 03:41:43.202748 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-30 
03:41:43.202760 | orchestrator | Friday 30 January 2026 03:41:41 +0000 (0:00:00.251) 0:00:24.282 ******** 2026-01-30 03:41:43.202771 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:41:43.202781 | orchestrator | 2026-01-30 03:41:43.202792 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:41:43.202803 | orchestrator | Friday 30 January 2026 03:41:41 +0000 (0:00:00.219) 0:00:24.501 ******** 2026-01-30 03:41:43.202814 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-30 03:41:43.202825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-30 03:41:43.202836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-30 03:41:43.202847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-30 03:41:43.202858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-30 03:41:43.202868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-30 03:41:43.202879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-30 03:41:43.202890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-30 03:41:43.202901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-30 03:41:43.202911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-30 03:41:43.202922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-30 03:41:43.202933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-30 03:41:43.202944 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-30 03:41:43.202954 | orchestrator |
2026-01-30 03:41:43.202965 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:43.202976 | orchestrator | Friday 30 January 2026 03:41:41 +0000 (0:00:00.385) 0:00:24.887 ********
2026-01-30 03:41:43.202987 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:43.202998 | orchestrator |
2026-01-30 03:41:43.203009 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:43.203019 | orchestrator | Friday 30 January 2026 03:41:41 +0000 (0:00:00.197) 0:00:25.085 ********
2026-01-30 03:41:43.203030 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:43.203041 | orchestrator |
2026-01-30 03:41:43.203052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:43.203063 | orchestrator | Friday 30 January 2026 03:41:42 +0000 (0:00:00.540) 0:00:25.625 ********
2026-01-30 03:41:43.203074 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:43.203085 | orchestrator |
2026-01-30 03:41:43.203096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:43.203107 | orchestrator | Friday 30 January 2026 03:41:42 +0000 (0:00:00.206) 0:00:25.831 ********
2026-01-30 03:41:43.203117 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:43.203128 | orchestrator |
2026-01-30 03:41:43.203139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:43.203150 | orchestrator | Friday 30 January 2026 03:41:42 +0000 (0:00:00.203) 0:00:26.035 ********
2026-01-30 03:41:43.203168 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:43.203179 | orchestrator |
2026-01-30 03:41:43.203190 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:43.203201 | orchestrator | Friday 30 January 2026 03:41:42 +0000 (0:00:00.192) 0:00:26.228 ********
2026-01-30 03:41:43.203212 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:43.203223 | orchestrator |
2026-01-30 03:41:43.203241 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:54.295135 | orchestrator | Friday 30 January 2026 03:41:43 +0000 (0:00:00.198) 0:00:26.426 ********
2026-01-30 03:41:54.295219 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295228 | orchestrator |
2026-01-30 03:41:54.295235 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:54.295241 | orchestrator | Friday 30 January 2026 03:41:43 +0000 (0:00:00.203) 0:00:26.630 ********
2026-01-30 03:41:54.295247 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295252 | orchestrator |
2026-01-30 03:41:54.295258 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:54.295263 | orchestrator | Friday 30 January 2026 03:41:43 +0000 (0:00:00.196) 0:00:26.827 ********
2026-01-30 03:41:54.295268 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb)
2026-01-30 03:41:54.295275 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb)
2026-01-30 03:41:54.295280 | orchestrator |
2026-01-30 03:41:54.295298 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:54.295304 | orchestrator | Friday 30 January 2026 03:41:44 +0000 (0:00:00.414) 0:00:27.241 ********
2026-01-30 03:41:54.295309 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea)
2026-01-30 03:41:54.295314 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea)
2026-01-30 03:41:54.295319 | orchestrator |
2026-01-30 03:41:54.295324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:54.295330 | orchestrator | Friday 30 January 2026 03:41:44 +0000 (0:00:00.482) 0:00:27.723 ********
2026-01-30 03:41:54.295335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4)
2026-01-30 03:41:54.295342 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4)
2026-01-30 03:41:54.295351 | orchestrator |
2026-01-30 03:41:54.295359 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:54.295367 | orchestrator | Friday 30 January 2026 03:41:45 +0000 (0:00:00.648) 0:00:28.371 ********
2026-01-30 03:41:54.295375 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c)
2026-01-30 03:41:54.295383 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c)
2026-01-30 03:41:54.295391 | orchestrator |
2026-01-30 03:41:54.295399 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-30 03:41:54.295407 | orchestrator | Friday 30 January 2026 03:41:46 +0000 (0:00:01.047) 0:00:29.419 ********
2026-01-30 03:41:54.295414 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-30 03:41:54.295422 | orchestrator |
2026-01-30 03:41:54.295471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295483 | orchestrator | Friday 30 January 2026 03:41:46 +0000 (0:00:00.345) 0:00:29.764 ********
2026-01-30 03:41:54.295491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-30 03:41:54.295501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-30 03:41:54.295509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-30 03:41:54.295538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-30 03:41:54.295547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-30 03:41:54.295556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-30 03:41:54.295564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-30 03:41:54.295572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-30 03:41:54.295580 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-30 03:41:54.295590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-30 03:41:54.295598 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-30 03:41:54.295603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-30 03:41:54.295609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-30 03:41:54.295614 | orchestrator |
2026-01-30 03:41:54.295619 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295624 | orchestrator | Friday 30 January 2026 03:41:46 +0000 (0:00:00.432) 0:00:30.197 ********
2026-01-30 03:41:54.295629 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295634 | orchestrator |
2026-01-30 03:41:54.295639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295644 | orchestrator | Friday 30 January 2026 03:41:47 +0000 (0:00:00.214) 0:00:30.412 ********
2026-01-30 03:41:54.295649 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295654 | orchestrator |
2026-01-30 03:41:54.295659 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295665 | orchestrator | Friday 30 January 2026 03:41:47 +0000 (0:00:00.204) 0:00:30.616 ********
2026-01-30 03:41:54.295670 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295675 | orchestrator |
2026-01-30 03:41:54.295693 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295700 | orchestrator | Friday 30 January 2026 03:41:47 +0000 (0:00:00.195) 0:00:30.811 ********
2026-01-30 03:41:54.295706 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295712 | orchestrator |
2026-01-30 03:41:54.295718 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295724 | orchestrator | Friday 30 January 2026 03:41:47 +0000 (0:00:00.202) 0:00:31.014 ********
2026-01-30 03:41:54.295730 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295735 | orchestrator |
2026-01-30 03:41:54.295741 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295748 | orchestrator | Friday 30 January 2026 03:41:48 +0000 (0:00:00.230) 0:00:31.436 ********
2026-01-30 03:41:54.295783 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295789 | orchestrator |
2026-01-30 03:41:54.295795 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295800 | orchestrator | Friday 30 January 2026 03:41:48 +0000 (0:00:00.209) 0:00:31.646 ********
2026-01-30 03:41:54.295806 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295812 | orchestrator |
2026-01-30 03:41:54.295818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295824 | orchestrator | Friday 30 January 2026 03:41:48 +0000 (0:00:00.583) 0:00:32.229 ********
2026-01-30 03:41:54.295830 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-30 03:41:54.295841 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-30 03:41:54.295847 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-30 03:41:54.295853 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-30 03:41:54.295858 | orchestrator |
2026-01-30 03:41:54.295864 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295870 | orchestrator | Friday 30 January 2026 03:41:49 +0000 (0:00:00.679) 0:00:32.909 ********
2026-01-30 03:41:54.295876 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295882 | orchestrator |
2026-01-30 03:41:54.295887 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295893 | orchestrator | Friday 30 January 2026 03:41:49 +0000 (0:00:00.201) 0:00:33.111 ********
2026-01-30 03:41:54.295899 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295905 | orchestrator |
2026-01-30 03:41:54.295910 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295916 | orchestrator | Friday 30 January 2026 03:41:50 +0000 (0:00:00.213) 0:00:33.325 ********
2026-01-30 03:41:54.295922 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295928 | orchestrator |
2026-01-30 03:41:54.295933 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-30 03:41:54.295939 | orchestrator | Friday 30 January 2026 03:41:50 +0000 (0:00:00.225) 0:00:33.551 ********
2026-01-30 03:41:54.295945 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295950 | orchestrator |
2026-01-30 03:41:54.295956 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-30 03:41:54.295962 | orchestrator | Friday 30 January 2026 03:41:50 +0000 (0:00:00.214) 0:00:33.765 ********
2026-01-30 03:41:54.295968 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.295974 | orchestrator |
2026-01-30 03:41:54.295979 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-30 03:41:54.295985 | orchestrator | Friday 30 January 2026 03:41:50 +0000 (0:00:00.136) 0:00:33.902 ********
2026-01-30 03:41:54.295991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}})
2026-01-30 03:41:54.295997 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a1704272-fd93-5be5-acd9-a48498ed5939'}})
2026-01-30 03:41:54.296003 | orchestrator |
2026-01-30 03:41:54.296009 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-30 03:41:54.296015 | orchestrator | Friday 30 January 2026 03:41:50 +0000 (0:00:00.197) 0:00:34.099 ********
2026-01-30 03:41:54.296021 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:41:54.296028 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:41:54.296033 | orchestrator |
2026-01-30 03:41:54.296039 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-30 03:41:54.296045 | orchestrator | Friday 30 January 2026 03:41:52 +0000 (0:00:01.844) 0:00:35.944 ********
2026-01-30 03:41:54.296051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:41:54.296059 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:41:54.296064 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:41:54.296069 | orchestrator |
2026-01-30 03:41:54.296074 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-30 03:41:54.296079 | orchestrator | Friday 30 January 2026 03:41:52 +0000 (0:00:00.152) 0:00:36.096 ********
2026-01-30 03:41:54.296084 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:41:54.296096 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.064398 | orchestrator |
2026-01-30 03:42:00.064656 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-30 03:42:00.064679 | orchestrator | Friday 30 January 2026 03:41:54 +0000 (0:00:01.417) 0:00:37.514 ********
2026-01-30 03:42:00.064690 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:00.064702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.064711 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.064721 | orchestrator |
2026-01-30 03:42:00.064747 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-30 03:42:00.064756 | orchestrator | Friday 30 January 2026 03:41:54 +0000 (0:00:00.144) 0:00:37.858 ********
2026-01-30 03:42:00.064765 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.064775 | orchestrator |
2026-01-30 03:42:00.064783 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-30 03:42:00.064792 | orchestrator | Friday 30 January 2026 03:41:54 +0000 (0:00:00.144) 0:00:38.002 ********
2026-01-30 03:42:00.064801 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:00.064810 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.064819 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.064828 | orchestrator |
2026-01-30 03:42:00.064837 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-30 03:42:00.064846 | orchestrator | Friday 30 January 2026 03:41:54 +0000 (0:00:00.158) 0:00:38.161 ********
2026-01-30 03:42:00.064855 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.064863 | orchestrator |
2026-01-30 03:42:00.064877 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-30 03:42:00.064893 | orchestrator | Friday 30 January 2026 03:41:55 +0000 (0:00:00.154) 0:00:38.315 ********
2026-01-30 03:42:00.064908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:00.064923 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.064937 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.064952 | orchestrator |
2026-01-30 03:42:00.064968 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-30 03:42:00.064985 | orchestrator | Friday 30 January 2026 03:41:55 +0000 (0:00:00.160) 0:00:38.476 ********
2026-01-30 03:42:00.065000 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065017 | orchestrator |
2026-01-30 03:42:00.065033 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-30 03:42:00.065048 | orchestrator | Friday 30 January 2026 03:41:55 +0000 (0:00:00.151) 0:00:38.627 ********
2026-01-30 03:42:00.065064 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:00.065077 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.065088 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065099 | orchestrator |
2026-01-30 03:42:00.065109 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-30 03:42:00.065139 | orchestrator | Friday 30 January 2026 03:41:55 +0000 (0:00:00.156) 0:00:38.783 ********
2026-01-30 03:42:00.065150 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:00.065162 | orchestrator |
2026-01-30 03:42:00.065172 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-30 03:42:00.065183 | orchestrator | Friday 30 January 2026 03:41:55 +0000 (0:00:00.129) 0:00:38.912 ********
2026-01-30 03:42:00.065194 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:00.065204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.065215 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065226 | orchestrator |
2026-01-30 03:42:00.065236 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-30 03:42:00.065246 | orchestrator | Friday 30 January 2026 03:41:55 +0000 (0:00:00.141) 0:00:39.054 ********
2026-01-30 03:42:00.065257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:00.065267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.065278 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065289 | orchestrator |
2026-01-30 03:42:00.065300 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-30 03:42:00.065339 | orchestrator | Friday 30 January 2026 03:41:55 +0000 (0:00:00.144) 0:00:39.199 ********
2026-01-30 03:42:00.065349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:00.065358 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:00.065367 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065376 | orchestrator |
2026-01-30 03:42:00.065385 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-30 03:42:00.065394 | orchestrator | Friday 30 January 2026 03:41:56 +0000 (0:00:00.156) 0:00:39.356 ********
2026-01-30 03:42:00.065409 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065418 | orchestrator |
2026-01-30 03:42:00.065447 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-30 03:42:00.065458 | orchestrator | Friday 30 January 2026 03:41:56 +0000 (0:00:00.318) 0:00:39.674 ********
2026-01-30 03:42:00.065467 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065475 | orchestrator |
2026-01-30 03:42:00.065484 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-30 03:42:00.065493 | orchestrator | Friday 30 January 2026 03:41:56 +0000 (0:00:00.143) 0:00:39.817 ********
2026-01-30 03:42:00.065502 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065511 | orchestrator |
2026-01-30 03:42:00.065520 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-30 03:42:00.065529 | orchestrator | Friday 30 January 2026 03:41:56 +0000 (0:00:00.140) 0:00:39.958 ********
2026-01-30 03:42:00.065538 | orchestrator | ok: [testbed-node-4] => {
2026-01-30 03:42:00.065547 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-30 03:42:00.065556 | orchestrator | }
2026-01-30 03:42:00.065565 | orchestrator |
2026-01-30 03:42:00.065573 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-30 03:42:00.065588 | orchestrator | Friday 30 January 2026 03:41:56 +0000 (0:00:00.149) 0:00:40.107 ********
2026-01-30 03:42:00.065602 | orchestrator | ok: [testbed-node-4] => {
2026-01-30 03:42:00.065615 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-30 03:42:00.065641 | orchestrator | }
2026-01-30 03:42:00.065656 | orchestrator |
2026-01-30 03:42:00.065670 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-30 03:42:00.065685 | orchestrator | Friday 30 January 2026 03:41:57 +0000 (0:00:00.139) 0:00:40.247 ********
2026-01-30 03:42:00.065700 | orchestrator | ok: [testbed-node-4] => {
2026-01-30 03:42:00.065714 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-30 03:42:00.065723 | orchestrator | }
2026-01-30 03:42:00.065732 | orchestrator |
2026-01-30 03:42:00.065741 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-30 03:42:00.065750 | orchestrator | Friday 30 January 2026 03:41:57 +0000 (0:00:00.138) 0:00:40.385 ********
2026-01-30 03:42:00.065759 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:00.065768 | orchestrator |
2026-01-30 03:42:00.065776 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-30 03:42:00.065785 | orchestrator | Friday 30 January 2026 03:41:57 +0000 (0:00:00.529) 0:00:40.914 ********
2026-01-30 03:42:00.065794 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:00.065802 | orchestrator |
2026-01-30 03:42:00.065811 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-30 03:42:00.065820 | orchestrator | Friday 30 January 2026 03:41:58 +0000 (0:00:00.535) 0:00:41.450 ********
2026-01-30 03:42:00.065829 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:00.065837 | orchestrator |
2026-01-30 03:42:00.065846 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-30 03:42:00.065855 | orchestrator | Friday 30 January 2026 03:41:58 +0000 (0:00:00.594) 0:00:42.044 ********
2026-01-30 03:42:00.065864 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:00.065872 | orchestrator |
2026-01-30 03:42:00.065881 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-30 03:42:00.065890 | orchestrator | Friday 30 January 2026 03:41:58 +0000 (0:00:00.148) 0:00:42.193 ********
2026-01-30 03:42:00.065899 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065907 | orchestrator |
2026-01-30 03:42:00.065916 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-30 03:42:00.065925 | orchestrator | Friday 30 January 2026 03:41:59 +0000 (0:00:00.125) 0:00:42.318 ********
2026-01-30 03:42:00.065934 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.065943 | orchestrator |
2026-01-30 03:42:00.065951 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-30 03:42:00.065960 | orchestrator | Friday 30 January 2026 03:41:59 +0000 (0:00:00.284) 0:00:42.603 ********
2026-01-30 03:42:00.065969 | orchestrator | ok: [testbed-node-4] => {
2026-01-30 03:42:00.065978 | orchestrator |     "vgs_report": {
2026-01-30 03:42:00.065988 | orchestrator |         "vg": []
2026-01-30 03:42:00.065996 | orchestrator |     }
2026-01-30 03:42:00.066005 | orchestrator | }
2026-01-30 03:42:00.066076 | orchestrator |
2026-01-30 03:42:00.066089 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-30 03:42:00.066098 | orchestrator | Friday 30 January 2026 03:41:59 +0000 (0:00:00.136) 0:00:42.740 ********
2026-01-30 03:42:00.066107 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.066116 | orchestrator |
2026-01-30 03:42:00.066124 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-30 03:42:00.066133 | orchestrator | Friday 30 January 2026 03:41:59 +0000 (0:00:00.133) 0:00:42.873 ********
2026-01-30 03:42:00.066142 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.066151 | orchestrator |
2026-01-30 03:42:00.066160 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-30 03:42:00.066169 | orchestrator | Friday 30 January 2026 03:41:59 +0000 (0:00:00.140) 0:00:43.013 ********
2026-01-30 03:42:00.066177 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.066186 | orchestrator |
2026-01-30 03:42:00.066195 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-30 03:42:00.066204 | orchestrator | Friday 30 January 2026 03:41:59 +0000 (0:00:00.134) 0:00:43.148 ********
2026-01-30 03:42:00.066220 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:00.066229 | orchestrator |
2026-01-30 03:42:00.066248 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-30 03:42:04.549535 | orchestrator | Friday 30 January 2026 03:42:00 +0000 (0:00:00.137) 0:00:43.285 ********
2026-01-30 03:42:04.549643 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549658 | orchestrator |
2026-01-30 03:42:04.549669 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-30 03:42:04.549679 | orchestrator | Friday 30 January 2026 03:42:00 +0000 (0:00:00.139) 0:00:43.425 ********
2026-01-30 03:42:04.549688 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549698 | orchestrator |
2026-01-30 03:42:04.549708 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-30 03:42:04.549718 | orchestrator | Friday 30 January 2026 03:42:00 +0000 (0:00:00.145) 0:00:43.571 ********
2026-01-30 03:42:04.549728 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549737 | orchestrator |
2026-01-30 03:42:04.549765 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-30 03:42:04.549776 | orchestrator | Friday 30 January 2026 03:42:00 +0000 (0:00:00.129) 0:00:43.700 ********
2026-01-30 03:42:04.549786 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549796 | orchestrator |
2026-01-30 03:42:04.549805 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-30 03:42:04.549815 | orchestrator | Friday 30 January 2026 03:42:00 +0000 (0:00:00.127) 0:00:43.827 ********
2026-01-30 03:42:04.549825 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549836 | orchestrator |
2026-01-30 03:42:04.549845 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-30 03:42:04.549855 | orchestrator | Friday 30 January 2026 03:42:00 +0000 (0:00:00.124) 0:00:43.952 ********
2026-01-30 03:42:04.549865 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549875 | orchestrator |
2026-01-30 03:42:04.549884 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-30 03:42:04.549895 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.305) 0:00:44.257 ********
2026-01-30 03:42:04.549904 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549915 | orchestrator |
2026-01-30 03:42:04.549924 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-30 03:42:04.549934 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.127) 0:00:44.384 ********
2026-01-30 03:42:04.549944 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549953 | orchestrator |
2026-01-30 03:42:04.549962 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-30 03:42:04.549972 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.127) 0:00:44.511 ********
2026-01-30 03:42:04.549982 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.549991 | orchestrator |
2026-01-30 03:42:04.550001 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-30 03:42:04.550011 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.124) 0:00:44.635 ********
2026-01-30 03:42:04.550081 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550091 | orchestrator |
2026-01-30 03:42:04.550101 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-30 03:42:04.550111 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.123) 0:00:44.759 ********
2026-01-30 03:42:04.550123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550145 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550156 | orchestrator |
2026-01-30 03:42:04.550166 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-30 03:42:04.550204 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.143) 0:00:44.903 ********
2026-01-30 03:42:04.550215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550225 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550235 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550244 | orchestrator |
2026-01-30 03:42:04.550253 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-30 03:42:04.550263 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.148) 0:00:45.052 ********
2026-01-30 03:42:04.550272 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550292 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550302 | orchestrator |
2026-01-30 03:42:04.550311 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-30 03:42:04.550321 | orchestrator | Friday 30 January 2026 03:42:01 +0000 (0:00:00.146) 0:00:45.198 ********
2026-01-30 03:42:04.550331 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550341 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550351 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550360 | orchestrator |
2026-01-30 03:42:04.550389 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-30 03:42:04.550400 | orchestrator | Friday 30 January 2026 03:42:02 +0000 (0:00:00.140) 0:00:45.339 ********
2026-01-30 03:42:04.550411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550486 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550496 | orchestrator |
2026-01-30 03:42:04.550514 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-30 03:42:04.550524 | orchestrator | Friday 30 January 2026 03:42:02 +0000 (0:00:00.138) 0:00:45.478 ********
2026-01-30 03:42:04.550534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550544 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550554 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550563 | orchestrator |
2026-01-30 03:42:04.550573 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-30 03:42:04.550583 | orchestrator | Friday 30 January 2026 03:42:02 +0000 (0:00:00.140) 0:00:45.619 ********
2026-01-30 03:42:04.550592 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550611 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550631 | orchestrator |
2026-01-30 03:42:04.550642 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-30 03:42:04.550651 | orchestrator | Friday 30 January 2026 03:42:02 +0000 (0:00:00.308) 0:00:45.927 ********
2026-01-30 03:42:04.550661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550680 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550690 | orchestrator |
2026-01-30 03:42:04.550699 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-30 03:42:04.550709 | orchestrator | Friday 30 January 2026 03:42:02 +0000 (0:00:00.150) 0:00:46.078 ********
2026-01-30 03:42:04.550719 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:04.550729 | orchestrator |
2026-01-30 03:42:04.550739 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-30 03:42:04.550749 | orchestrator | Friday 30 January 2026 03:42:03 +0000 (0:00:00.533) 0:00:46.611 ********
2026-01-30 03:42:04.550758 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:04.550768 | orchestrator |
2026-01-30 03:42:04.550778 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-30 03:42:04.550788 | orchestrator | Friday 30 January 2026 03:42:03 +0000 (0:00:00.519) 0:00:47.130 ********
2026-01-30 03:42:04.550798 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:42:04.550808 | orchestrator |
2026-01-30 03:42:04.550817 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-30 03:42:04.550827 | orchestrator | Friday 30 January 2026 03:42:04 +0000 (0:00:00.141) 0:00:47.272 ********
2026-01-30 03:42:04.550836 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'vg_name': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550848 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'vg_name': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550857 | orchestrator |
2026-01-30 03:42:04.550867 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-30 03:42:04.550877 | orchestrator | Friday 30 January 2026 03:42:04 +0000 (0:00:00.180) 0:00:47.453 ********
2026-01-30 03:42:04.550886 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:04.550906 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:04.550915 | orchestrator |
2026-01-30 03:42:04.550925 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-30 03:42:04.550935 | orchestrator | Friday 30 January 2026 03:42:04 +0000 (0:00:00.159) 0:00:47.612 ********
2026-01-30 03:42:04.550945 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 03:42:04.550965 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 03:42:10.671645 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:42:10.671747 | orchestrator |
2026-01-30 03:42:10.671759 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-30 03:42:10.671769 |
orchestrator | Friday 30 January 2026 03:42:04 +0000 (0:00:00.163) 0:00:47.775 ******** 2026-01-30 03:42:10.671777 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})  2026-01-30 03:42:10.671815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})  2026-01-30 03:42:10.671825 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:42:10.671871 | orchestrator | 2026-01-30 03:42:10.671880 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-30 03:42:10.671888 | orchestrator | Friday 30 January 2026 03:42:04 +0000 (0:00:00.185) 0:00:47.961 ******** 2026-01-30 03:42:10.671896 | orchestrator | ok: [testbed-node-4] => { 2026-01-30 03:42:10.671903 | orchestrator |  "lvm_report": { 2026-01-30 03:42:10.671912 | orchestrator |  "lv": [ 2026-01-30 03:42:10.671920 | orchestrator |  { 2026-01-30 03:42:10.671927 | orchestrator |  "lv_name": "osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267", 2026-01-30 03:42:10.671935 | orchestrator |  "vg_name": "ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267" 2026-01-30 03:42:10.671942 | orchestrator |  }, 2026-01-30 03:42:10.671950 | orchestrator |  { 2026-01-30 03:42:10.671957 | orchestrator |  "lv_name": "osd-block-a1704272-fd93-5be5-acd9-a48498ed5939", 2026-01-30 03:42:10.671964 | orchestrator |  "vg_name": "ceph-a1704272-fd93-5be5-acd9-a48498ed5939" 2026-01-30 03:42:10.671971 | orchestrator |  } 2026-01-30 03:42:10.671979 | orchestrator |  ], 2026-01-30 03:42:10.671986 | orchestrator |  "pv": [ 2026-01-30 03:42:10.671993 | orchestrator |  { 2026-01-30 03:42:10.672000 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-30 03:42:10.672007 | orchestrator |  "vg_name": "ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267" 2026-01-30 03:42:10.672015 | orchestrator |  }, 2026-01-30 
03:42:10.672022 | orchestrator |  { 2026-01-30 03:42:10.672030 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-30 03:42:10.672037 | orchestrator |  "vg_name": "ceph-a1704272-fd93-5be5-acd9-a48498ed5939" 2026-01-30 03:42:10.672044 | orchestrator |  } 2026-01-30 03:42:10.672051 | orchestrator |  ] 2026-01-30 03:42:10.672058 | orchestrator |  } 2026-01-30 03:42:10.672066 | orchestrator | } 2026-01-30 03:42:10.672074 | orchestrator | 2026-01-30 03:42:10.672081 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-30 03:42:10.672088 | orchestrator | 2026-01-30 03:42:10.672095 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-30 03:42:10.672103 | orchestrator | Friday 30 January 2026 03:42:05 +0000 (0:00:00.283) 0:00:48.244 ******** 2026-01-30 03:42:10.672110 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-30 03:42:10.672117 | orchestrator | 2026-01-30 03:42:10.672125 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-30 03:42:10.672132 | orchestrator | Friday 30 January 2026 03:42:05 +0000 (0:00:00.629) 0:00:48.874 ******** 2026-01-30 03:42:10.672139 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:10.672146 | orchestrator | 2026-01-30 03:42:10.672154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672161 | orchestrator | Friday 30 January 2026 03:42:05 +0000 (0:00:00.252) 0:00:49.126 ******** 2026-01-30 03:42:10.672168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-30 03:42:10.672176 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-30 03:42:10.672183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-30 03:42:10.672190 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-30 03:42:10.672197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-30 03:42:10.672205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-30 03:42:10.672212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-30 03:42:10.672225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-30 03:42:10.672233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-30 03:42:10.672240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-30 03:42:10.672247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-30 03:42:10.672254 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-30 03:42:10.672262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-30 03:42:10.672269 | orchestrator | 2026-01-30 03:42:10.672276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672283 | orchestrator | Friday 30 January 2026 03:42:06 +0000 (0:00:00.407) 0:00:49.534 ******** 2026-01-30 03:42:10.672291 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672298 | orchestrator | 2026-01-30 03:42:10.672305 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672312 | orchestrator | Friday 30 January 2026 03:42:06 +0000 (0:00:00.196) 0:00:49.731 ******** 2026-01-30 03:42:10.672320 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672327 | orchestrator | 2026-01-30 
03:42:10.672334 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672354 | orchestrator | Friday 30 January 2026 03:42:06 +0000 (0:00:00.196) 0:00:49.927 ******** 2026-01-30 03:42:10.672362 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672369 | orchestrator | 2026-01-30 03:42:10.672376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672384 | orchestrator | Friday 30 January 2026 03:42:06 +0000 (0:00:00.187) 0:00:50.115 ******** 2026-01-30 03:42:10.672391 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672398 | orchestrator | 2026-01-30 03:42:10.672406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672413 | orchestrator | Friday 30 January 2026 03:42:07 +0000 (0:00:00.195) 0:00:50.310 ******** 2026-01-30 03:42:10.672421 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672446 | orchestrator | 2026-01-30 03:42:10.672454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672461 | orchestrator | Friday 30 January 2026 03:42:07 +0000 (0:00:00.224) 0:00:50.535 ******** 2026-01-30 03:42:10.672468 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672475 | orchestrator | 2026-01-30 03:42:10.672482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672490 | orchestrator | Friday 30 January 2026 03:42:07 +0000 (0:00:00.179) 0:00:50.714 ******** 2026-01-30 03:42:10.672497 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672504 | orchestrator | 2026-01-30 03:42:10.672511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672518 | orchestrator | Friday 30 January 2026 03:42:07 +0000 (0:00:00.177) 
0:00:50.891 ******** 2026-01-30 03:42:10.672525 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:10.672533 | orchestrator | 2026-01-30 03:42:10.672540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672547 | orchestrator | Friday 30 January 2026 03:42:08 +0000 (0:00:00.590) 0:00:51.482 ******** 2026-01-30 03:42:10.672554 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844) 2026-01-30 03:42:10.672563 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844) 2026-01-30 03:42:10.672570 | orchestrator | 2026-01-30 03:42:10.672577 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672584 | orchestrator | Friday 30 January 2026 03:42:08 +0000 (0:00:00.423) 0:00:51.906 ******** 2026-01-30 03:42:10.672623 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de) 2026-01-30 03:42:10.672637 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de) 2026-01-30 03:42:10.672645 | orchestrator | 2026-01-30 03:42:10.672652 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672660 | orchestrator | Friday 30 January 2026 03:42:09 +0000 (0:00:00.415) 0:00:52.321 ******** 2026-01-30 03:42:10.672667 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660) 2026-01-30 03:42:10.672675 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660) 2026-01-30 03:42:10.672682 | orchestrator | 2026-01-30 03:42:10.672690 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672742 | orchestrator | Friday 30 
January 2026 03:42:09 +0000 (0:00:00.422) 0:00:52.743 ******** 2026-01-30 03:42:10.672751 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290) 2026-01-30 03:42:10.672758 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290) 2026-01-30 03:42:10.672766 | orchestrator | 2026-01-30 03:42:10.672773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-30 03:42:10.672781 | orchestrator | Friday 30 January 2026 03:42:09 +0000 (0:00:00.418) 0:00:53.162 ******** 2026-01-30 03:42:10.672788 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-30 03:42:10.672795 | orchestrator | 2026-01-30 03:42:10.672803 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:10.672810 | orchestrator | Friday 30 January 2026 03:42:10 +0000 (0:00:00.326) 0:00:53.488 ******** 2026-01-30 03:42:10.672817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-01-30 03:42:10.672825 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-01-30 03:42:10.672832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-01-30 03:42:10.672839 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-01-30 03:42:10.672846 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-01-30 03:42:10.672854 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-01-30 03:42:10.672861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-01-30 03:42:10.672868 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-01-30 03:42:10.672876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-01-30 03:42:10.672883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-01-30 03:42:10.672890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-01-30 03:42:10.672904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-01-30 03:42:19.127304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-01-30 03:42:19.127398 | orchestrator | 2026-01-30 03:42:19.127411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127471 | orchestrator | Friday 30 January 2026 03:42:10 +0000 (0:00:00.402) 0:00:53.891 ******** 2026-01-30 03:42:19.127484 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127496 | orchestrator | 2026-01-30 03:42:19.127508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127534 | orchestrator | Friday 30 January 2026 03:42:10 +0000 (0:00:00.191) 0:00:54.082 ******** 2026-01-30 03:42:19.127546 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127579 | orchestrator | 2026-01-30 03:42:19.127591 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127602 | orchestrator | Friday 30 January 2026 03:42:11 +0000 (0:00:00.215) 0:00:54.298 ******** 2026-01-30 03:42:19.127706 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127719 | orchestrator | 2026-01-30 03:42:19.127730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127741 | 
orchestrator | Friday 30 January 2026 03:42:11 +0000 (0:00:00.200) 0:00:54.499 ******** 2026-01-30 03:42:19.127752 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127763 | orchestrator | 2026-01-30 03:42:19.127774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127785 | orchestrator | Friday 30 January 2026 03:42:11 +0000 (0:00:00.197) 0:00:54.696 ******** 2026-01-30 03:42:19.127795 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127806 | orchestrator | 2026-01-30 03:42:19.127817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127828 | orchestrator | Friday 30 January 2026 03:42:12 +0000 (0:00:00.550) 0:00:55.246 ******** 2026-01-30 03:42:19.127839 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127850 | orchestrator | 2026-01-30 03:42:19.127862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127875 | orchestrator | Friday 30 January 2026 03:42:12 +0000 (0:00:00.204) 0:00:55.451 ******** 2026-01-30 03:42:19.127888 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127901 | orchestrator | 2026-01-30 03:42:19.127912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127925 | orchestrator | Friday 30 January 2026 03:42:12 +0000 (0:00:00.197) 0:00:55.648 ******** 2026-01-30 03:42:19.127938 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.127950 | orchestrator | 2026-01-30 03:42:19.127962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.127975 | orchestrator | Friday 30 January 2026 03:42:12 +0000 (0:00:00.199) 0:00:55.848 ******** 2026-01-30 03:42:19.127987 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-01-30 03:42:19.128000 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-01-30 03:42:19.128014 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-01-30 03:42:19.128026 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-01-30 03:42:19.128039 | orchestrator | 2026-01-30 03:42:19.128051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.128063 | orchestrator | Friday 30 January 2026 03:42:13 +0000 (0:00:00.652) 0:00:56.500 ******** 2026-01-30 03:42:19.128076 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128088 | orchestrator | 2026-01-30 03:42:19.128101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.128113 | orchestrator | Friday 30 January 2026 03:42:13 +0000 (0:00:00.205) 0:00:56.706 ******** 2026-01-30 03:42:19.128126 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128138 | orchestrator | 2026-01-30 03:42:19.128149 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.128160 | orchestrator | Friday 30 January 2026 03:42:13 +0000 (0:00:00.201) 0:00:56.907 ******** 2026-01-30 03:42:19.128171 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128181 | orchestrator | 2026-01-30 03:42:19.128192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-30 03:42:19.128203 | orchestrator | Friday 30 January 2026 03:42:13 +0000 (0:00:00.198) 0:00:57.105 ******** 2026-01-30 03:42:19.128214 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128224 | orchestrator | 2026-01-30 03:42:19.128235 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-30 03:42:19.128246 | orchestrator | Friday 30 January 2026 03:42:14 +0000 (0:00:00.203) 0:00:57.308 ******** 2026-01-30 03:42:19.128257 | orchestrator | skipping: [testbed-node-5] 2026-01-30 
03:42:19.128268 | orchestrator | 2026-01-30 03:42:19.128288 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-30 03:42:19.128299 | orchestrator | Friday 30 January 2026 03:42:14 +0000 (0:00:00.133) 0:00:57.442 ******** 2026-01-30 03:42:19.128311 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c96ee3ed-1860-5729-adba-bbe0a3b53c50'}}) 2026-01-30 03:42:19.128323 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}}) 2026-01-30 03:42:19.128334 | orchestrator | 2026-01-30 03:42:19.128345 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-30 03:42:19.128356 | orchestrator | Friday 30 January 2026 03:42:14 +0000 (0:00:00.178) 0:00:57.621 ******** 2026-01-30 03:42:19.128368 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}) 2026-01-30 03:42:19.128380 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}) 2026-01-30 03:42:19.128391 | orchestrator | 2026-01-30 03:42:19.128402 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-30 03:42:19.128449 | orchestrator | Friday 30 January 2026 03:42:16 +0000 (0:00:01.851) 0:00:59.472 ******** 2026-01-30 03:42:19.128462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:19.128474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:19.128485 | orchestrator | skipping: 
[testbed-node-5] 2026-01-30 03:42:19.128496 | orchestrator | 2026-01-30 03:42:19.128514 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-30 03:42:19.128525 | orchestrator | Friday 30 January 2026 03:42:16 +0000 (0:00:00.318) 0:00:59.790 ******** 2026-01-30 03:42:19.128536 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}) 2026-01-30 03:42:19.128547 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}) 2026-01-30 03:42:19.128558 | orchestrator | 2026-01-30 03:42:19.128569 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-30 03:42:19.128580 | orchestrator | Friday 30 January 2026 03:42:17 +0000 (0:00:01.263) 0:01:01.054 ******** 2026-01-30 03:42:19.128591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:19.128602 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:19.128613 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128624 | orchestrator | 2026-01-30 03:42:19.128635 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-30 03:42:19.128646 | orchestrator | Friday 30 January 2026 03:42:17 +0000 (0:00:00.149) 0:01:01.203 ******** 2026-01-30 03:42:19.128657 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128668 | orchestrator | 2026-01-30 03:42:19.128679 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-30 03:42:19.128690 | 
orchestrator | Friday 30 January 2026 03:42:18 +0000 (0:00:00.139) 0:01:01.343 ******** 2026-01-30 03:42:19.128701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:19.128712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:19.128730 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128741 | orchestrator | 2026-01-30 03:42:19.128752 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-30 03:42:19.128763 | orchestrator | Friday 30 January 2026 03:42:18 +0000 (0:00:00.155) 0:01:01.499 ******** 2026-01-30 03:42:19.128774 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128785 | orchestrator | 2026-01-30 03:42:19.128795 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-30 03:42:19.128807 | orchestrator | Friday 30 January 2026 03:42:18 +0000 (0:00:00.144) 0:01:01.644 ******** 2026-01-30 03:42:19.128818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:19.128829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:19.128839 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128850 | orchestrator | 2026-01-30 03:42:19.128861 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-30 03:42:19.128872 | orchestrator | Friday 30 January 2026 03:42:18 +0000 (0:00:00.149) 0:01:01.793 ******** 2026-01-30 03:42:19.128883 | orchestrator | 
skipping: [testbed-node-5] 2026-01-30 03:42:19.128894 | orchestrator | 2026-01-30 03:42:19.128905 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-30 03:42:19.128916 | orchestrator | Friday 30 January 2026 03:42:18 +0000 (0:00:00.134) 0:01:01.928 ******** 2026-01-30 03:42:19.128927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:19.128938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:19.128949 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:19.128960 | orchestrator | 2026-01-30 03:42:19.128971 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-30 03:42:19.128982 | orchestrator | Friday 30 January 2026 03:42:18 +0000 (0:00:00.147) 0:01:02.076 ******** 2026-01-30 03:42:19.128993 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:19.129004 | orchestrator | 2026-01-30 03:42:19.129015 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-30 03:42:19.129026 | orchestrator | Friday 30 January 2026 03:42:18 +0000 (0:00:00.137) 0:01:02.213 ******** 2026-01-30 03:42:19.129044 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:25.261694 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:25.261808 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.261826 | orchestrator | 2026-01-30 03:42:25.261840 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-01-30 03:42:25.261854 | orchestrator | Friday 30 January 2026 03:42:19 +0000 (0:00:00.140) 0:01:02.353 ******** 2026-01-30 03:42:25.261883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:25.261895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:25.261906 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.261918 | orchestrator | 2026-01-30 03:42:25.261930 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-30 03:42:25.261941 | orchestrator | Friday 30 January 2026 03:42:19 +0000 (0:00:00.148) 0:01:02.502 ******** 2026-01-30 03:42:25.261975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:25.261987 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:25.261998 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262009 | orchestrator | 2026-01-30 03:42:25.262084 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-30 03:42:25.262096 | orchestrator | Friday 30 January 2026 03:42:19 +0000 (0:00:00.329) 0:01:02.832 ******** 2026-01-30 03:42:25.262107 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262118 | orchestrator | 2026-01-30 03:42:25.262129 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-30 03:42:25.262140 | orchestrator | Friday 30 January 2026 03:42:19 +0000 
(0:00:00.142) 0:01:02.974 ******** 2026-01-30 03:42:25.262151 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262162 | orchestrator | 2026-01-30 03:42:25.262173 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-30 03:42:25.262186 | orchestrator | Friday 30 January 2026 03:42:19 +0000 (0:00:00.125) 0:01:03.100 ******** 2026-01-30 03:42:25.262199 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262211 | orchestrator | 2026-01-30 03:42:25.262224 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-30 03:42:25.262238 | orchestrator | Friday 30 January 2026 03:42:19 +0000 (0:00:00.125) 0:01:03.225 ******** 2026-01-30 03:42:25.262250 | orchestrator | ok: [testbed-node-5] => { 2026-01-30 03:42:25.262262 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-30 03:42:25.262275 | orchestrator | } 2026-01-30 03:42:25.262287 | orchestrator | 2026-01-30 03:42:25.262300 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-30 03:42:25.262312 | orchestrator | Friday 30 January 2026 03:42:20 +0000 (0:00:00.142) 0:01:03.368 ******** 2026-01-30 03:42:25.262325 | orchestrator | ok: [testbed-node-5] => { 2026-01-30 03:42:25.262338 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-30 03:42:25.262350 | orchestrator | } 2026-01-30 03:42:25.262362 | orchestrator | 2026-01-30 03:42:25.262374 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-30 03:42:25.262387 | orchestrator | Friday 30 January 2026 03:42:20 +0000 (0:00:00.132) 0:01:03.500 ******** 2026-01-30 03:42:25.262399 | orchestrator | ok: [testbed-node-5] => { 2026-01-30 03:42:25.262412 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-30 03:42:25.262447 | orchestrator | } 2026-01-30 03:42:25.262460 | orchestrator | 2026-01-30 03:42:25.262472 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-30 03:42:25.262484 | orchestrator | Friday 30 January 2026 03:42:20 +0000 (0:00:00.146) 0:01:03.647 ******** 2026-01-30 03:42:25.262497 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:25.262510 | orchestrator | 2026-01-30 03:42:25.262522 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-30 03:42:25.262535 | orchestrator | Friday 30 January 2026 03:42:20 +0000 (0:00:00.567) 0:01:04.214 ******** 2026-01-30 03:42:25.262546 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:25.262557 | orchestrator | 2026-01-30 03:42:25.262568 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-30 03:42:25.262579 | orchestrator | Friday 30 January 2026 03:42:21 +0000 (0:00:00.504) 0:01:04.718 ******** 2026-01-30 03:42:25.262590 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:25.262601 | orchestrator | 2026-01-30 03:42:25.262611 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-30 03:42:25.262622 | orchestrator | Friday 30 January 2026 03:42:21 +0000 (0:00:00.496) 0:01:05.215 ******** 2026-01-30 03:42:25.262633 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:25.262644 | orchestrator | 2026-01-30 03:42:25.262655 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-30 03:42:25.262675 | orchestrator | Friday 30 January 2026 03:42:22 +0000 (0:00:00.143) 0:01:05.359 ******** 2026-01-30 03:42:25.262686 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262697 | orchestrator | 2026-01-30 03:42:25.262708 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-30 03:42:25.262719 | orchestrator | Friday 30 January 2026 03:42:22 +0000 (0:00:00.121) 0:01:05.480 ******** 2026-01-30 03:42:25.262730 | 
orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262741 | orchestrator | 2026-01-30 03:42:25.262752 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-30 03:42:25.262763 | orchestrator | Friday 30 January 2026 03:42:22 +0000 (0:00:00.267) 0:01:05.748 ******** 2026-01-30 03:42:25.262774 | orchestrator | ok: [testbed-node-5] => { 2026-01-30 03:42:25.262785 | orchestrator |  "vgs_report": { 2026-01-30 03:42:25.262797 | orchestrator |  "vg": [] 2026-01-30 03:42:25.262826 | orchestrator |  } 2026-01-30 03:42:25.262838 | orchestrator | } 2026-01-30 03:42:25.262849 | orchestrator | 2026-01-30 03:42:25.262860 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-30 03:42:25.262871 | orchestrator | Friday 30 January 2026 03:42:22 +0000 (0:00:00.145) 0:01:05.893 ******** 2026-01-30 03:42:25.262882 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262892 | orchestrator | 2026-01-30 03:42:25.262903 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-30 03:42:25.262914 | orchestrator | Friday 30 January 2026 03:42:22 +0000 (0:00:00.141) 0:01:06.034 ******** 2026-01-30 03:42:25.262935 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.262954 | orchestrator | 2026-01-30 03:42:25.262979 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-30 03:42:25.263001 | orchestrator | Friday 30 January 2026 03:42:22 +0000 (0:00:00.137) 0:01:06.172 ******** 2026-01-30 03:42:25.263017 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263034 | orchestrator | 2026-01-30 03:42:25.263051 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-30 03:42:25.263068 | orchestrator | Friday 30 January 2026 03:42:23 +0000 (0:00:00.147) 0:01:06.320 ******** 2026-01-30 03:42:25.263084 | 
orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263101 | orchestrator | 2026-01-30 03:42:25.263117 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-30 03:42:25.263134 | orchestrator | Friday 30 January 2026 03:42:23 +0000 (0:00:00.135) 0:01:06.455 ******** 2026-01-30 03:42:25.263152 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263169 | orchestrator | 2026-01-30 03:42:25.263188 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-30 03:42:25.263206 | orchestrator | Friday 30 January 2026 03:42:23 +0000 (0:00:00.135) 0:01:06.590 ******** 2026-01-30 03:42:25.263225 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263243 | orchestrator | 2026-01-30 03:42:25.263262 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-30 03:42:25.263280 | orchestrator | Friday 30 January 2026 03:42:23 +0000 (0:00:00.129) 0:01:06.719 ******** 2026-01-30 03:42:25.263299 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263316 | orchestrator | 2026-01-30 03:42:25.263334 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-30 03:42:25.263353 | orchestrator | Friday 30 January 2026 03:42:23 +0000 (0:00:00.131) 0:01:06.850 ******** 2026-01-30 03:42:25.263371 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263382 | orchestrator | 2026-01-30 03:42:25.263393 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-30 03:42:25.263404 | orchestrator | Friday 30 January 2026 03:42:23 +0000 (0:00:00.137) 0:01:06.988 ******** 2026-01-30 03:42:25.263445 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263459 | orchestrator | 2026-01-30 03:42:25.263470 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-01-30 03:42:25.263481 | orchestrator | Friday 30 January 2026 03:42:23 +0000 (0:00:00.132) 0:01:07.120 ******** 2026-01-30 03:42:25.263502 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263513 | orchestrator | 2026-01-30 03:42:25.263524 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-30 03:42:25.263535 | orchestrator | Friday 30 January 2026 03:42:24 +0000 (0:00:00.122) 0:01:07.242 ******** 2026-01-30 03:42:25.263547 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263565 | orchestrator | 2026-01-30 03:42:25.263583 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-30 03:42:25.263601 | orchestrator | Friday 30 January 2026 03:42:24 +0000 (0:00:00.316) 0:01:07.558 ******** 2026-01-30 03:42:25.263620 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263637 | orchestrator | 2026-01-30 03:42:25.263656 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-30 03:42:25.263675 | orchestrator | Friday 30 January 2026 03:42:24 +0000 (0:00:00.149) 0:01:07.708 ******** 2026-01-30 03:42:25.263687 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263703 | orchestrator | 2026-01-30 03:42:25.263722 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-30 03:42:25.263739 | orchestrator | Friday 30 January 2026 03:42:24 +0000 (0:00:00.136) 0:01:07.844 ******** 2026-01-30 03:42:25.263757 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263775 | orchestrator | 2026-01-30 03:42:25.263794 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-30 03:42:25.263813 | orchestrator | Friday 30 January 2026 03:42:24 +0000 (0:00:00.139) 0:01:07.983 ******** 2026-01-30 03:42:25.263827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:25.263846 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:25.263864 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263881 | orchestrator | 2026-01-30 03:42:25.263899 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-30 03:42:25.263917 | orchestrator | Friday 30 January 2026 03:42:24 +0000 (0:00:00.165) 0:01:08.149 ******** 2026-01-30 03:42:25.263934 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:25.263952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:25.263971 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:25.263990 | orchestrator | 2026-01-30 03:42:25.264008 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-30 03:42:25.264027 | orchestrator | Friday 30 January 2026 03:42:25 +0000 (0:00:00.167) 0:01:08.317 ******** 2026-01-30 03:42:25.264060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.143946 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.144068 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.144094 | orchestrator | 2026-01-30 03:42:28.144138 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-01-30 03:42:28.144161 | orchestrator | Friday 30 January 2026 03:42:25 +0000 (0:00:00.172) 0:01:08.489 ******** 2026-01-30 03:42:28.144180 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.144200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.144246 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.144266 | orchestrator | 2026-01-30 03:42:28.144285 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-30 03:42:28.144303 | orchestrator | Friday 30 January 2026 03:42:25 +0000 (0:00:00.143) 0:01:08.632 ******** 2026-01-30 03:42:28.144321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.144339 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.144359 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.144377 | orchestrator | 2026-01-30 03:42:28.144397 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-30 03:42:28.144481 | orchestrator | Friday 30 January 2026 03:42:25 +0000 (0:00:00.149) 0:01:08.782 ******** 2026-01-30 03:42:28.144504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.144523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.144541 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.144555 | orchestrator | 2026-01-30 03:42:28.144567 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-30 03:42:28.144580 | orchestrator | Friday 30 January 2026 03:42:25 +0000 (0:00:00.145) 0:01:08.928 ******** 2026-01-30 03:42:28.144594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.144613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.144631 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.144650 | orchestrator | 2026-01-30 03:42:28.144669 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-30 03:42:28.144686 | orchestrator | Friday 30 January 2026 03:42:25 +0000 (0:00:00.150) 0:01:09.078 ******** 2026-01-30 03:42:28.144705 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.144717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.144728 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.144739 | orchestrator | 2026-01-30 03:42:28.144750 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-30 03:42:28.144761 | orchestrator | Friday 30 January 2026 03:42:25 +0000 (0:00:00.147) 0:01:09.226 ******** 2026-01-30 03:42:28.144771 | 
orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:28.144807 | orchestrator | 2026-01-30 03:42:28.144819 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-30 03:42:28.144830 | orchestrator | Friday 30 January 2026 03:42:26 +0000 (0:00:00.725) 0:01:09.951 ******** 2026-01-30 03:42:28.144841 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:28.144852 | orchestrator | 2026-01-30 03:42:28.144863 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-30 03:42:28.144874 | orchestrator | Friday 30 January 2026 03:42:27 +0000 (0:00:00.497) 0:01:10.449 ******** 2026-01-30 03:42:28.144885 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:28.144896 | orchestrator | 2026-01-30 03:42:28.144907 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-30 03:42:28.144918 | orchestrator | Friday 30 January 2026 03:42:27 +0000 (0:00:00.135) 0:01:10.584 ******** 2026-01-30 03:42:28.144947 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'vg_name': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}) 2026-01-30 03:42:28.144970 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'vg_name': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}) 2026-01-30 03:42:28.144989 | orchestrator | 2026-01-30 03:42:28.145007 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-30 03:42:28.145027 | orchestrator | Friday 30 January 2026 03:42:27 +0000 (0:00:00.155) 0:01:10.740 ******** 2026-01-30 03:42:28.145068 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.145100 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.145120 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.145142 | orchestrator | 2026-01-30 03:42:28.145160 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-30 03:42:28.145178 | orchestrator | Friday 30 January 2026 03:42:27 +0000 (0:00:00.153) 0:01:10.893 ******** 2026-01-30 03:42:28.145198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.145216 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.145234 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.145245 | orchestrator | 2026-01-30 03:42:28.145256 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-30 03:42:28.145267 | orchestrator | Friday 30 January 2026 03:42:27 +0000 (0:00:00.153) 0:01:11.047 ******** 2026-01-30 03:42:28.145278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 03:42:28.145289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 03:42:28.145300 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:28.145310 | orchestrator | 2026-01-30 03:42:28.145321 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-30 03:42:28.145332 | orchestrator | Friday 30 January 2026 03:42:27 +0000 (0:00:00.147) 0:01:11.195 ******** 2026-01-30 03:42:28.145343 | 
orchestrator | ok: [testbed-node-5] => { 2026-01-30 03:42:28.145354 | orchestrator |  "lvm_report": { 2026-01-30 03:42:28.145365 | orchestrator |  "lv": [ 2026-01-30 03:42:28.145376 | orchestrator |  { 2026-01-30 03:42:28.145388 | orchestrator |  "lv_name": "osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd", 2026-01-30 03:42:28.145400 | orchestrator |  "vg_name": "ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd" 2026-01-30 03:42:28.145411 | orchestrator |  }, 2026-01-30 03:42:28.145457 | orchestrator |  { 2026-01-30 03:42:28.145469 | orchestrator |  "lv_name": "osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50", 2026-01-30 03:42:28.145480 | orchestrator |  "vg_name": "ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50" 2026-01-30 03:42:28.145490 | orchestrator |  } 2026-01-30 03:42:28.145501 | orchestrator |  ], 2026-01-30 03:42:28.145512 | orchestrator |  "pv": [ 2026-01-30 03:42:28.145523 | orchestrator |  { 2026-01-30 03:42:28.145534 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-30 03:42:28.145545 | orchestrator |  "vg_name": "ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50" 2026-01-30 03:42:28.145556 | orchestrator |  }, 2026-01-30 03:42:28.145567 | orchestrator |  { 2026-01-30 03:42:28.145577 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-30 03:42:28.145608 | orchestrator |  "vg_name": "ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd" 2026-01-30 03:42:28.145625 | orchestrator |  } 2026-01-30 03:42:28.145641 | orchestrator |  ] 2026-01-30 03:42:28.145659 | orchestrator |  } 2026-01-30 03:42:28.145671 | orchestrator | } 2026-01-30 03:42:28.145681 | orchestrator | 2026-01-30 03:42:28.145691 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:42:28.145701 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-30 03:42:28.145711 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-30 03:42:28.145721 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-30 03:42:28.145730 | orchestrator | 2026-01-30 03:42:28.145740 | orchestrator | 2026-01-30 03:42:28.145750 | orchestrator | 2026-01-30 03:42:28.145759 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:42:28.145769 | orchestrator | Friday 30 January 2026 03:42:28 +0000 (0:00:00.149) 0:01:11.344 ******** 2026-01-30 03:42:28.145779 | orchestrator | =============================================================================== 2026-01-30 03:42:28.145788 | orchestrator | Create block VGs -------------------------------------------------------- 5.65s 2026-01-30 03:42:28.145798 | orchestrator | Create block LVs -------------------------------------------------------- 4.15s 2026-01-30 03:42:28.145807 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.79s 2026-01-30 03:42:28.145817 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.76s 2026-01-30 03:42:28.145826 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.60s 2026-01-30 03:42:28.145836 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2026-01-30 03:42:28.145845 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2026-01-30 03:42:28.145855 | orchestrator | Add known links to the list of available block devices ------------------ 1.27s 2026-01-30 03:42:28.145873 | orchestrator | Add known partitions to the list of available block devices ------------- 1.23s 2026-01-30 03:42:28.435467 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.12s 2026-01-30 03:42:28.435567 | orchestrator | Add known links to the list of available block devices ------------------ 1.05s 2026-01-30 03:42:28.435576 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-01-30 03:42:28.435599 | orchestrator | Print LVM report data --------------------------------------------------- 0.72s 2026-01-30 03:42:28.435606 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2026-01-30 03:42:28.435613 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2026-01-30 03:42:28.435621 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.67s 2026-01-30 03:42:28.435628 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-01-30 03:42:28.435636 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-01-30 03:42:28.435643 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.65s 2026-01-30 03:42:28.435651 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.64s 2026-01-30 03:42:40.794556 | orchestrator | 2026-01-30 03:42:40 | INFO  | Task f07f12f6-650a-4ad3-9b63-4ffd8371b957 (facts) was prepared for execution. 2026-01-30 03:42:40.794647 | orchestrator | 2026-01-30 03:42:40 | INFO  | It takes a moment until task f07f12f6-650a-4ad3-9b63-4ffd8371b957 (facts) has been started and output is visible here. 
2026-01-30 03:42:52.804825 | orchestrator | 2026-01-30 03:42:52.804965 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-30 03:42:52.805021 | orchestrator | 2026-01-30 03:42:52.805040 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-30 03:42:52.805057 | orchestrator | Friday 30 January 2026 03:42:44 +0000 (0:00:00.196) 0:00:00.196 ******** 2026-01-30 03:42:52.805075 | orchestrator | ok: [testbed-manager] 2026-01-30 03:42:52.805092 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:42:52.805107 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:42:52.805124 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:42:52.805139 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:42:52.805155 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:42:52.805170 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:52.805185 | orchestrator | 2026-01-30 03:42:52.805200 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-30 03:42:52.805214 | orchestrator | Friday 30 January 2026 03:42:45 +0000 (0:00:00.921) 0:00:01.118 ******** 2026-01-30 03:42:52.805229 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:42:52.805245 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:42:52.805262 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:42:52.805278 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:42:52.805294 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:42:52.805309 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:42:52.805324 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:52.805339 | orchestrator | 2026-01-30 03:42:52.805356 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-30 03:42:52.805372 | orchestrator | 2026-01-30 03:42:52.805389 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-30 03:42:52.805436 | orchestrator | Friday 30 January 2026 03:42:46 +0000 (0:00:01.101) 0:00:02.219 ******** 2026-01-30 03:42:52.805457 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:42:52.805474 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:42:52.805491 | orchestrator | ok: [testbed-manager] 2026-01-30 03:42:52.805508 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:42:52.805524 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:42:52.805540 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:42:52.805554 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:42:52.805565 | orchestrator | 2026-01-30 03:42:52.805577 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-30 03:42:52.805588 | orchestrator | 2026-01-30 03:42:52.805600 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-30 03:42:52.805611 | orchestrator | Friday 30 January 2026 03:42:51 +0000 (0:00:05.654) 0:00:07.874 ******** 2026-01-30 03:42:52.805622 | orchestrator | skipping: [testbed-manager] 2026-01-30 03:42:52.805633 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:42:52.805644 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:42:52.805655 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:42:52.805667 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:42:52.805678 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:42:52.805688 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:42:52.805699 | orchestrator | 2026-01-30 03:42:52.805710 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:42:52.805721 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:42:52.805732 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-30 03:42:52.805742 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:42:52.805753 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:42:52.805763 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:42:52.805786 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:42:52.805795 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 03:42:52.805805 | orchestrator | 2026-01-30 03:42:52.805815 | orchestrator | 2026-01-30 03:42:52.805824 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:42:52.805849 | orchestrator | Friday 30 January 2026 03:42:52 +0000 (0:00:00.529) 0:00:08.403 ******** 2026-01-30 03:42:52.805859 | orchestrator | =============================================================================== 2026-01-30 03:42:52.805869 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.65s 2026-01-30 03:42:52.805879 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2026-01-30 03:42:52.805888 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s 2026-01-30 03:42:52.805898 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2026-01-30 03:42:55.108365 | orchestrator | 2026-01-30 03:42:55 | INFO  | Task defa659b-3ca6-4601-b8da-b49b5e2602ea (ceph) was prepared for execution. 2026-01-30 03:42:55.108500 | orchestrator | 2026-01-30 03:42:55 | INFO  | It takes a moment until task defa659b-3ca6-4601-b8da-b49b5e2602ea (ceph) has been started and output is visible here. 
2026-01-30 03:43:12.224377 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-30 03:43:12.224537 | orchestrator | 2.16.14 2026-01-30 03:43:12.224558 | orchestrator | 2026-01-30 03:43:12.224572 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-30 03:43:12.224584 | orchestrator | 2026-01-30 03:43:12.224596 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 03:43:12.224608 | orchestrator | Friday 30 January 2026 03:42:59 +0000 (0:00:00.753) 0:00:00.753 ******** 2026-01-30 03:43:12.224621 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:43:12.224633 | orchestrator | 2026-01-30 03:43:12.224645 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 03:43:12.224656 | orchestrator | Friday 30 January 2026 03:43:01 +0000 (0:00:01.085) 0:00:01.838 ******** 2026-01-30 03:43:12.224668 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:12.224679 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:12.224690 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:12.224701 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:43:12.224712 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:43:12.224723 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:43:12.224735 | orchestrator | 2026-01-30 03:43:12.224746 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 03:43:12.224758 | orchestrator | Friday 30 January 2026 03:43:02 +0000 (0:00:01.227) 0:00:03.066 ******** 2026-01-30 03:43:12.224769 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:12.224780 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:12.224791 | orchestrator | ok: [testbed-node-5] 2026-01-30 
03:43:12.224802 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:12.224812 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:12.224823 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:12.224834 | orchestrator |
2026-01-30 03:43:12.224845 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 03:43:12.224856 | orchestrator | Friday 30 January 2026 03:43:02 +0000 (0:00:00.708) 0:00:03.775 ********
2026-01-30 03:43:12.224867 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:12.224879 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:12.224889 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:12.224900 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:12.224937 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:12.224948 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:12.224959 | orchestrator |
2026-01-30 03:43:12.224970 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 03:43:12.224981 | orchestrator | Friday 30 January 2026 03:43:03 +0000 (0:00:00.941) 0:00:04.717 ********
2026-01-30 03:43:12.224992 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:12.225003 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:12.225014 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:12.225024 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:12.225035 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:12.225046 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:12.225057 | orchestrator |
2026-01-30 03:43:12.225068 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 03:43:12.225079 | orchestrator | Friday 30 January 2026 03:43:04 +0000 (0:00:00.721) 0:00:05.438 ********
2026-01-30 03:43:12.225090 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:12.225101 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:12.225111 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:12.225122 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:12.225133 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:12.225144 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:12.225154 | orchestrator |
2026-01-30 03:43:12.225165 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 03:43:12.225177 | orchestrator | Friday 30 January 2026 03:43:05 +0000 (0:00:00.565) 0:00:06.004 ********
2026-01-30 03:43:12.225187 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:12.225198 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:12.225209 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:12.225220 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:12.225230 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:12.225241 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:12.225252 | orchestrator |
2026-01-30 03:43:12.225263 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 03:43:12.225274 | orchestrator | Friday 30 January 2026 03:43:05 +0000 (0:00:00.773) 0:00:06.778 ********
2026-01-30 03:43:12.225285 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:12.225297 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:12.225308 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:12.225324 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:12.225343 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:12.225361 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:12.225379 | orchestrator |
2026-01-30 03:43:12.225422 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 03:43:12.225443 | orchestrator | Friday 30 January 2026 03:43:06 +0000 (0:00:00.610) 0:00:07.389 ********
2026-01-30 03:43:12.225461 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:12.225472 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:12.225483 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:12.225494 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:12.225505 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:12.225530 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:12.225542 | orchestrator |
2026-01-30 03:43:12.225553 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 03:43:12.225564 | orchestrator | Friday 30 January 2026 03:43:07 +0000 (0:00:00.731) 0:00:08.120 ********
2026-01-30 03:43:12.225575 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 03:43:12.225586 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 03:43:12.225598 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 03:43:12.225608 | orchestrator |
2026-01-30 03:43:12.225619 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 03:43:12.225630 | orchestrator | Friday 30 January 2026 03:43:07 +0000 (0:00:00.660) 0:00:08.780 ********
2026-01-30 03:43:12.225651 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:12.225662 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:12.225672 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:12.225703 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:12.225715 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:12.225726 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:12.225737 | orchestrator |
2026-01-30 03:43:12.225748 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 03:43:12.225758 | orchestrator | Friday 30 January 2026 03:43:08 +0000 (0:00:00.681) 0:00:09.462 ********
2026-01-30 03:43:12.225769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 03:43:12.225781 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 03:43:12.225791 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 03:43:12.225802 | orchestrator |
2026-01-30 03:43:12.225813 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 03:43:12.225824 | orchestrator | Friday 30 January 2026 03:43:10 +0000 (0:00:02.270) 0:00:11.733 ********
2026-01-30 03:43:12.225836 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 03:43:12.225848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 03:43:12.225859 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 03:43:12.225870 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:12.225881 | orchestrator |
2026-01-30 03:43:12.225892 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-30 03:43:12.225903 | orchestrator | Friday 30 January 2026 03:43:11 +0000 (0:00:00.399) 0:00:12.133 ********
2026-01-30 03:43:12.225915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 03:43:12.225930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 03:43:12.225941 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 03:43:12.225952 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:12.225963 | orchestrator |
2026-01-30 03:43:12.225974 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-30 03:43:12.225985 | orchestrator | Friday 30 January 2026 03:43:11 +0000 (0:00:00.591) 0:00:12.725 ********
2026-01-30 03:43:12.225998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:12.226012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:12.226100 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:12.226133 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:12.226153 | orchestrator |
2026-01-30 03:43:12.226179 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-30 03:43:12.226199 | orchestrator | Friday 30 January 2026 03:43:12 +0000 (0:00:00.153) 0:00:12.878 ********
2026-01-30 03:43:12.226233 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 03:43:09.464050', 'end': '2026-01-30 03:43:09.515610', 'delta': '0:00:00.051560', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 03:43:21.178351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 03:43:10.016613', 'end': '2026-01-30 03:43:10.056086', 'delta': '0:00:00.039473', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 03:43:21.178483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 03:43:10.546366', 'end': '2026-01-30 03:43:10.590133', 'delta': '0:00:00.043767', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 03:43:21.178497 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.178507 | orchestrator |
2026-01-30 03:43:21.178517 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-30 03:43:21.178527 | orchestrator | Friday 30 January 2026 03:43:12 +0000 (0:00:00.160) 0:00:13.039 ********
2026-01-30 03:43:21.178535 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:21.178544 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:21.178552 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:21.178560 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:21.178568 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:21.178576 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:21.178584 | orchestrator |
2026-01-30 03:43:21.178598 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 03:43:21.178611 | orchestrator | Friday 30 January 2026 03:43:12 +0000 (0:00:00.687) 0:00:13.726 ********
2026-01-30 03:43:21.178624 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 03:43:21.178637 | orchestrator |
2026-01-30 03:43:21.178650 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 03:43:21.178663 | orchestrator | Friday 30 January 2026 03:43:13 +0000 (0:00:00.837) 0:00:14.563 ********
2026-01-30 03:43:21.178705 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.178720 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.178738 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.178757 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.178771 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.178784 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.178798 | orchestrator |
2026-01-30 03:43:21.178812 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 03:43:21.178826 | orchestrator | Friday 30 January 2026 03:43:14 +0000 (0:00:00.788) 0:00:15.352 ********
2026-01-30 03:43:21.178841 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.178855 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.178868 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.178882 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.178896 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.178912 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.178928 | orchestrator |
2026-01-30 03:43:21.178943 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 03:43:21.178956 | orchestrator | Friday 30 January 2026 03:43:15 +0000 (0:00:01.041) 0:00:16.393 ********
2026-01-30 03:43:21.178964 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.178972 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.178980 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.178988 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.178996 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179017 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179025 | orchestrator |
2026-01-30 03:43:21.179033 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 03:43:21.179041 | orchestrator | Friday 30 January 2026 03:43:16 +0000 (0:00:00.544) 0:00:16.938 ********
2026-01-30 03:43:21.179049 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179057 | orchestrator |
2026-01-30 03:43:21.179065 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 03:43:21.179073 | orchestrator | Friday 30 January 2026 03:43:16 +0000 (0:00:00.104) 0:00:17.042 ********
2026-01-30 03:43:21.179081 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179089 | orchestrator |
2026-01-30 03:43:21.179096 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 03:43:21.179105 | orchestrator | Friday 30 January 2026 03:43:16 +0000 (0:00:00.197) 0:00:17.240 ********
2026-01-30 03:43:21.179112 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179120 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.179128 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.179136 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.179144 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179152 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179160 | orchestrator |
2026-01-30 03:43:21.179185 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 03:43:21.179194 | orchestrator | Friday 30 January 2026 03:43:17 +0000 (0:00:00.690) 0:00:17.931 ********
2026-01-30 03:43:21.179202 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179210 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.179218 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.179226 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.179234 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179241 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179249 | orchestrator |
2026-01-30 03:43:21.179257 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 03:43:21.179265 | orchestrator | Friday 30 January 2026 03:43:17 +0000 (0:00:00.591) 0:00:18.522 ********
2026-01-30 03:43:21.179273 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179281 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.179289 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.179305 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.179313 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179321 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179329 | orchestrator |
2026-01-30 03:43:21.179337 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 03:43:21.179345 | orchestrator | Friday 30 January 2026 03:43:18 +0000 (0:00:00.750) 0:00:19.272 ********
2026-01-30 03:43:21.179353 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179360 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.179368 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.179376 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.179384 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179416 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179424 | orchestrator |
2026-01-30 03:43:21.179432 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 03:43:21.179440 | orchestrator | Friday 30 January 2026 03:43:19 +0000 (0:00:00.558) 0:00:19.831 ********
2026-01-30 03:43:21.179448 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179456 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.179464 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.179471 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.179479 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179487 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179495 | orchestrator |
2026-01-30 03:43:21.179503 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 03:43:21.179511 | orchestrator | Friday 30 January 2026 03:43:19 +0000 (0:00:00.711) 0:00:20.542 ********
2026-01-30 03:43:21.179519 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179527 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.179534 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.179542 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.179550 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179558 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179565 | orchestrator |
2026-01-30 03:43:21.179574 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 03:43:21.179583 | orchestrator | Friday 30 January 2026 03:43:20 +0000 (0:00:00.573) 0:00:21.116 ********
2026-01-30 03:43:21.179591 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.179598 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:21.179606 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:21.179614 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:21.179622 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:21.179630 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:21.179638 | orchestrator |
2026-01-30 03:43:21.179646 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 03:43:21.179653 | orchestrator | Friday 30 January 2026 03:43:21 +0000 (0:00:00.772) 0:00:21.888 ********
2026-01-30 03:43:21.179663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.179679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.179700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-30 03:43:21.275647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-30 03:43:21.275692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.275719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-30 03:43:21.275762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.390776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-30 03:43:21.390875 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.390892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-30 03:43:21.390905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.390917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.390929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.390992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.391006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.391019 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:21.391082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-30 03:43:21.391100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16',
'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.391115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.391143 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.391165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.607710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.607818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.607835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.607848 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.607889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-30 03:43:21.607924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.607937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.607949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.607981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.607994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.608008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.608030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.608060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.608093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.742952 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:21.743076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.743103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.743159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-30 03:43:21.743216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.743440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.743465 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:21.743487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.743641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.957747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.957869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.957885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.957897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.957923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.957959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.957984 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:21.957997 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:21.958011 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:21.958095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.958109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.958126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.958138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.958149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.958166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.958185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:21.958217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:43:22.340505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 
'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:22.341726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:43:22.341812 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:22.341831 | orchestrator | 2026-01-30 03:43:22.341849 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 03:43:22.341867 | orchestrator | Friday 30 January 2026 03:43:21 +0000 (0:00:00.888) 0:00:22.776 ******** 2026-01-30 03:43:22.341885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.341962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.341982 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.342001 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.342113 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.342143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.342158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.342193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.342223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.361914 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.362090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.362115 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-01-30 03:43:22.362153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.362188 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.362207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.362220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.362232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.362263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454276 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454294 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454307 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454340 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:22.454371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454458 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454493 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.454564 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.598703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 
'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.598807 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:43:22.598844 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598871 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598903 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598932 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.598976 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:22.598990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.599001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.599033 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677541 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677574 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677650 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677663 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677722 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677740 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677752 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677772 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.677795 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793041 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793149 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793188 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:22.793203 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793236 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793249 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793261 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793273 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793291 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793310 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793322 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:22.793344 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020710 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020799 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:23.020813 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:23.020824 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020835 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020844 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020853 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020862 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020916 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020927 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020937 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020949 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:23.020977 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-30 03:43:33.495241 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:33.495352 | orchestrator |
2026-01-30 03:43:33.495369 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-30 03:43:33.495382 | orchestrator | Friday 30 January 2026 03:43:23 +0000 (0:00:01.058) 0:00:23.835 ********
2026-01-30 03:43:33.495443 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:33.495455 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:33.495465 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:33.495475 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:33.495485 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:33.495495 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:33.495505 | orchestrator |
2026-01-30 03:43:33.495515 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-30 03:43:33.495526 | orchestrator | Friday 30 January 2026 03:43:23 +0000 (0:00:00.908) 0:00:24.744 ********
2026-01-30 03:43:33.495536 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:43:33.495546 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:43:33.495557 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:43:33.495575 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:43:33.495590 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:43:33.495606 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:43:33.495622 | orchestrator |
2026-01-30 03:43:33.495638 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 03:43:33.495655 | orchestrator | Friday 30 January 2026 03:43:24 +0000 (0:00:00.721) 0:00:25.466 ********
2026-01-30 03:43:33.495673 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:43:33.495690 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:43:33.495706 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:43:33.495722 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:43:33.495739 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:43:33.495757 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:43:33.495775 | orchestrator |
2026-01-30 03:43:33.495793 |
orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 03:43:33.495813 | orchestrator | Friday 30 January 2026 03:43:25 +0000 (0:00:00.552) 0:00:26.018 ******** 2026-01-30 03:43:33.495829 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.495845 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:33.495863 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:33.495881 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:33.495899 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:33.495916 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:33.495935 | orchestrator | 2026-01-30 03:43:33.495953 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 03:43:33.495972 | orchestrator | Friday 30 January 2026 03:43:25 +0000 (0:00:00.737) 0:00:26.755 ******** 2026-01-30 03:43:33.495991 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.496010 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:33.496029 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:33.496065 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:33.496075 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:33.496085 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:33.496094 | orchestrator | 2026-01-30 03:43:33.496104 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 03:43:33.496114 | orchestrator | Friday 30 January 2026 03:43:26 +0000 (0:00:00.591) 0:00:27.346 ******** 2026-01-30 03:43:33.496124 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.496133 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:33.496143 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:33.496153 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:33.496162 | orchestrator | skipping: [testbed-node-1] 
2026-01-30 03:43:33.496172 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:33.496182 | orchestrator | 2026-01-30 03:43:33.496191 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 03:43:33.496201 | orchestrator | Friday 30 January 2026 03:43:27 +0000 (0:00:00.736) 0:00:28.083 ******** 2026-01-30 03:43:33.496211 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-30 03:43:33.496221 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-30 03:43:33.496231 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-30 03:43:33.496241 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-30 03:43:33.496250 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-30 03:43:33.496259 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-30 03:43:33.496269 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-30 03:43:33.496279 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 03:43:33.496288 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-30 03:43:33.496298 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-30 03:43:33.496307 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-30 03:43:33.496317 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-30 03:43:33.496326 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-01-30 03:43:33.496335 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 03:43:33.496345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-30 03:43:33.496354 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-01-30 03:43:33.496364 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-01-30 03:43:33.496437 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-30 
03:43:33.496450 | orchestrator | 2026-01-30 03:43:33.496460 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 03:43:33.496470 | orchestrator | Friday 30 January 2026 03:43:28 +0000 (0:00:01.515) 0:00:29.598 ******** 2026-01-30 03:43:33.496480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-30 03:43:33.496490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-30 03:43:33.496500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-30 03:43:33.496510 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.496519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-30 03:43:33.496529 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-30 03:43:33.496539 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-30 03:43:33.496570 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:33.496580 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-30 03:43:33.496590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-30 03:43:33.496599 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-30 03:43:33.496609 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:33.496622 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 03:43:33.496638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 03:43:33.496666 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 03:43:33.496682 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:33.496698 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-30 03:43:33.496712 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-30 03:43:33.496729 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-01-30 03:43:33.496745 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:33.496761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-30 03:43:33.496777 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-30 03:43:33.496794 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-30 03:43:33.496811 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:33.496827 | orchestrator | 2026-01-30 03:43:33.496843 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 03:43:33.496854 | orchestrator | Friday 30 January 2026 03:43:29 +0000 (0:00:00.827) 0:00:30.426 ******** 2026-01-30 03:43:33.496864 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:33.496873 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:33.496883 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:33.496893 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:43:33.496903 | orchestrator | 2026-01-30 03:43:33.496913 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 03:43:33.496924 | orchestrator | Friday 30 January 2026 03:43:30 +0000 (0:00:00.922) 0:00:31.349 ******** 2026-01-30 03:43:33.496934 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.496944 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:33.496953 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:33.496963 | orchestrator | 2026-01-30 03:43:33.496972 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 03:43:33.496982 | orchestrator | Friday 30 January 2026 03:43:30 +0000 (0:00:00.316) 0:00:31.666 ******** 2026-01-30 03:43:33.496992 | orchestrator 
| skipping: [testbed-node-3] 2026-01-30 03:43:33.497001 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:33.497011 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:33.497020 | orchestrator | 2026-01-30 03:43:33.497029 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 03:43:33.497039 | orchestrator | Friday 30 January 2026 03:43:31 +0000 (0:00:00.310) 0:00:31.976 ******** 2026-01-30 03:43:33.497049 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.497058 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:33.497068 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:33.497077 | orchestrator | 2026-01-30 03:43:33.497086 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 03:43:33.497096 | orchestrator | Friday 30 January 2026 03:43:31 +0000 (0:00:00.456) 0:00:32.432 ******** 2026-01-30 03:43:33.497106 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:33.497115 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:33.497125 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:33.497134 | orchestrator | 2026-01-30 03:43:33.497143 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 03:43:33.497153 | orchestrator | Friday 30 January 2026 03:43:32 +0000 (0:00:00.417) 0:00:32.850 ******** 2026-01-30 03:43:33.497162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:43:33.497172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:43:33.497182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:43:33.497191 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.497201 | orchestrator | 2026-01-30 03:43:33.497210 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 03:43:33.497228 | 
orchestrator | Friday 30 January 2026 03:43:32 +0000 (0:00:00.391) 0:00:33.241 ******** 2026-01-30 03:43:33.497238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:43:33.497247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:43:33.497257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:43:33.497266 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.497276 | orchestrator | 2026-01-30 03:43:33.497285 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 03:43:33.497295 | orchestrator | Friday 30 January 2026 03:43:32 +0000 (0:00:00.362) 0:00:33.603 ******** 2026-01-30 03:43:33.497311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:43:33.497321 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:43:33.497331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:43:33.497340 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:33.497350 | orchestrator | 2026-01-30 03:43:33.497359 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 03:43:33.497369 | orchestrator | Friday 30 January 2026 03:43:33 +0000 (0:00:00.367) 0:00:33.971 ******** 2026-01-30 03:43:33.497378 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:33.497412 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:33.497423 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:33.497433 | orchestrator | 2026-01-30 03:43:33.497442 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 03:43:33.497568 | orchestrator | Friday 30 January 2026 03:43:33 +0000 (0:00:00.338) 0:00:34.310 ******** 2026-01-30 03:43:52.027202 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-30 03:43:52.027339 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-01-30 03:43:52.027358 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-30 03:43:52.027371 | orchestrator | 2026-01-30 03:43:52.027449 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 03:43:52.027471 | orchestrator | Friday 30 January 2026 03:43:34 +0000 (0:00:00.875) 0:00:35.186 ******** 2026-01-30 03:43:52.027488 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 03:43:52.027506 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 03:43:52.027524 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 03:43:52.027542 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-30 03:43:52.027559 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 03:43:52.027578 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 03:43:52.027597 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 03:43:52.027617 | orchestrator | 2026-01-30 03:43:52.027636 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 03:43:52.027655 | orchestrator | Friday 30 January 2026 03:43:35 +0000 (0:00:00.754) 0:00:35.940 ******** 2026-01-30 03:43:52.027667 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 03:43:52.027678 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 03:43:52.027689 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 03:43:52.027701 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-30 03:43:52.027712 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 03:43:52.027723 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 03:43:52.027734 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 03:43:52.027745 | orchestrator | 2026-01-30 03:43:52.027755 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 03:43:52.027796 | orchestrator | Friday 30 January 2026 03:43:36 +0000 (0:00:01.841) 0:00:37.782 ******** 2026-01-30 03:43:52.027808 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:43:52.027821 | orchestrator | 2026-01-30 03:43:52.027832 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 03:43:52.027843 | orchestrator | Friday 30 January 2026 03:43:38 +0000 (0:00:01.161) 0:00:38.944 ******** 2026-01-30 03:43:52.027854 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:43:52.027865 | orchestrator | 2026-01-30 03:43:52.027876 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 03:43:52.027887 | orchestrator | Friday 30 January 2026 03:43:39 +0000 (0:00:01.178) 0:00:40.123 ******** 2026-01-30 03:43:52.027899 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.027910 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:52.027921 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:52.027932 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:43:52.027943 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:43:52.027954 | 
orchestrator | ok: [testbed-node-2] 2026-01-30 03:43:52.027965 | orchestrator | 2026-01-30 03:43:52.027976 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 03:43:52.027992 | orchestrator | Friday 30 January 2026 03:43:40 +0000 (0:00:01.229) 0:00:41.352 ******** 2026-01-30 03:43:52.028011 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.028030 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.028047 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.028064 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.028083 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.028098 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.028117 | orchestrator | 2026-01-30 03:43:52.028135 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 03:43:52.028151 | orchestrator | Friday 30 January 2026 03:43:41 +0000 (0:00:00.671) 0:00:42.024 ******** 2026-01-30 03:43:52.028169 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.028188 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.028207 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.028227 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.028246 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.028264 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.028277 | orchestrator | 2026-01-30 03:43:52.028314 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 03:43:52.028334 | orchestrator | Friday 30 January 2026 03:43:42 +0000 (0:00:00.836) 0:00:42.860 ******** 2026-01-30 03:43:52.028352 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.028370 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.028452 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.028472 | orchestrator | skipping: [testbed-node-2] 2026-01-30 
03:43:52.028484 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.028495 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.028506 | orchestrator | 2026-01-30 03:43:52.028517 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 03:43:52.028528 | orchestrator | Friday 30 January 2026 03:43:42 +0000 (0:00:00.692) 0:00:43.553 ******** 2026-01-30 03:43:52.028539 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.028550 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:52.028583 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:52.028595 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:43:52.028606 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:43:52.028617 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:43:52.028628 | orchestrator | 2026-01-30 03:43:52.028640 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 03:43:52.028664 | orchestrator | Friday 30 January 2026 03:43:43 +0000 (0:00:01.253) 0:00:44.807 ******** 2026-01-30 03:43:52.028675 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.028686 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:52.028697 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:52.028708 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.028719 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.028730 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.028741 | orchestrator | 2026-01-30 03:43:52.028752 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 03:43:52.028763 | orchestrator | Friday 30 January 2026 03:43:44 +0000 (0:00:00.595) 0:00:45.403 ******** 2026-01-30 03:43:52.028774 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.028785 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:52.028796 | orchestrator | 
skipping: [testbed-node-5] 2026-01-30 03:43:52.028806 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.028817 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.028828 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.028839 | orchestrator | 2026-01-30 03:43:52.028850 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 03:43:52.028861 | orchestrator | Friday 30 January 2026 03:43:45 +0000 (0:00:00.755) 0:00:46.159 ******** 2026-01-30 03:43:52.028872 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.028883 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.028894 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.028905 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:43:52.028916 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:43:52.028927 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:43:52.028937 | orchestrator | 2026-01-30 03:43:52.028949 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 03:43:52.028959 | orchestrator | Friday 30 January 2026 03:43:46 +0000 (0:00:01.008) 0:00:47.167 ******** 2026-01-30 03:43:52.028970 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.028981 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.028992 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.029002 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:43:52.029013 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:43:52.029024 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:43:52.029035 | orchestrator | 2026-01-30 03:43:52.029046 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 03:43:52.029057 | orchestrator | Friday 30 January 2026 03:43:47 +0000 (0:00:01.219) 0:00:48.386 ******** 2026-01-30 03:43:52.029068 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.029079 | orchestrator | 
skipping: [testbed-node-4] 2026-01-30 03:43:52.029089 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:52.029100 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.029111 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.029122 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.029133 | orchestrator | 2026-01-30 03:43:52.029144 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 03:43:52.029155 | orchestrator | Friday 30 January 2026 03:43:48 +0000 (0:00:00.592) 0:00:48.979 ******** 2026-01-30 03:43:52.029170 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.029189 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:52.029207 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:52.029225 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:43:52.029242 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:43:52.029259 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:43:52.029277 | orchestrator | 2026-01-30 03:43:52.029294 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 03:43:52.029311 | orchestrator | Friday 30 January 2026 03:43:48 +0000 (0:00:00.831) 0:00:49.810 ******** 2026-01-30 03:43:52.029328 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.029345 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.029377 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.029438 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.029457 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.029475 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.029486 | orchestrator | 2026-01-30 03:43:52.029497 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 03:43:52.029508 | orchestrator | Friday 30 January 2026 03:43:49 +0000 (0:00:00.595) 0:00:50.406 ******** 2026-01-30 
03:43:52.029519 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.029530 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.029541 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.029552 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.029562 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.029573 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.029584 | orchestrator | 2026-01-30 03:43:52.029595 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 03:43:52.029606 | orchestrator | Friday 30 January 2026 03:43:50 +0000 (0:00:00.818) 0:00:51.225 ******** 2026-01-30 03:43:52.029617 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:43:52.029628 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:43:52.029639 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:43:52.029649 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.029660 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.029679 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.029690 | orchestrator | 2026-01-30 03:43:52.029701 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 03:43:52.029712 | orchestrator | Friday 30 January 2026 03:43:50 +0000 (0:00:00.587) 0:00:51.812 ******** 2026-01-30 03:43:52.029723 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.029734 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:43:52.029745 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:43:52.029755 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:43:52.029766 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:43:52.029777 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:43:52.029788 | orchestrator | 2026-01-30 03:43:52.029799 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 03:43:52.029810 
| orchestrator | Friday 30 January 2026 03:43:51 +0000 (0:00:00.754) 0:00:52.567 ******** 2026-01-30 03:43:52.029821 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:43:52.029842 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:04.868890 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:04.869079 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:04.869109 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:04.869129 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:04.869149 | orchestrator | 2026-01-30 03:45:04.869170 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 03:45:04.869190 | orchestrator | Friday 30 January 2026 03:43:52 +0000 (0:00:00.595) 0:00:53.163 ******** 2026-01-30 03:45:04.869205 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:04.869225 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:04.869238 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:04.869250 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:45:04.869262 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:45:04.869274 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:45:04.869284 | orchestrator | 2026-01-30 03:45:04.869296 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 03:45:04.869307 | orchestrator | Friday 30 January 2026 03:43:53 +0000 (0:00:00.830) 0:00:53.993 ******** 2026-01-30 03:45:04.869318 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:45:04.869329 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:45:04.869340 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:45:04.869395 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:45:04.869411 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:45:04.869425 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:45:04.869470 | orchestrator | 2026-01-30 03:45:04.869484 | orchestrator | TASK [ceph-handler : 
Set_fact handler_exporter_status] ************************* 2026-01-30 03:45:04.869497 | orchestrator | Friday 30 January 2026 03:43:53 +0000 (0:00:00.610) 0:00:54.604 ******** 2026-01-30 03:45:04.869511 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:45:04.869522 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:45:04.869533 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:45:04.869544 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:45:04.869555 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:45:04.869566 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:45:04.869578 | orchestrator | 2026-01-30 03:45:04.869589 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 03:45:04.869601 | orchestrator | Friday 30 January 2026 03:43:55 +0000 (0:00:01.236) 0:00:55.840 ******** 2026-01-30 03:45:04.869612 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:45:04.869623 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:45:04.869634 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:45:04.869645 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:45:04.869656 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:45:04.869667 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:45:04.869677 | orchestrator | 2026-01-30 03:45:04.869688 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 03:45:04.869700 | orchestrator | Friday 30 January 2026 03:43:56 +0000 (0:00:01.657) 0:00:57.497 ******** 2026-01-30 03:45:04.869711 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:45:04.869722 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:45:04.869733 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:45:04.869744 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:45:04.869755 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:45:04.869766 | orchestrator | changed: [testbed-node-1] 2026-01-30 
03:45:04.869777 | orchestrator |
2026-01-30 03:45:04.869788 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 03:45:04.869800 | orchestrator | Friday 30 January 2026 03:43:58 +0000 (0:00:02.305) 0:00:59.802 ********
2026-01-30 03:45:04.869813 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:45:04.869826 | orchestrator |
2026-01-30 03:45:04.869837 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-30 03:45:04.869848 | orchestrator | Friday 30 January 2026 03:44:00 +0000 (0:00:01.137) 0:01:00.940 ********
2026-01-30 03:45:04.869865 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:04.869883 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:04.869901 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:04.869919 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:04.869937 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:04.869954 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:04.869971 | orchestrator |
2026-01-30 03:45:04.869989 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-30 03:45:04.870006 | orchestrator | Friday 30 January 2026 03:44:00 +0000 (0:00:00.632) 0:01:01.572 ********
2026-01-30 03:45:04.870108 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:04.870176 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:04.870195 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:04.870214 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:04.870232 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:04.870251 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:04.870269 | orchestrator |
2026-01-30 03:45:04.870288 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-30 03:45:04.870303 | orchestrator | Friday 30 January 2026 03:44:01 +0000 (0:00:00.715) 0:01:02.287 ********
2026-01-30 03:45:04.870314 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 03:45:04.870344 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 03:45:04.870400 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 03:45:04.870413 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 03:45:04.870424 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 03:45:04.870435 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 03:45:04.870446 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 03:45:04.870459 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 03:45:04.870470 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 03:45:04.870510 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 03:45:04.870522 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 03:45:04.870533 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 03:45:04.870544 | orchestrator |
2026-01-30 03:45:04.870555 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-30 03:45:04.870566 | orchestrator | Friday 30 January 2026 03:44:02 +0000 (0:00:01.268) 0:01:03.556 ********
2026-01-30 03:45:04.870577 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:45:04.870588 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:45:04.870599 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:45:04.870610 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:45:04.870621 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:45:04.870631 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:45:04.870642 | orchestrator |
2026-01-30 03:45:04.870654 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-30 03:45:04.870665 | orchestrator | Friday 30 January 2026 03:44:03 +0000 (0:00:01.168) 0:01:04.724 ********
2026-01-30 03:45:04.870676 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:04.870686 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:04.870697 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:04.870708 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:04.870719 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:04.870730 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:04.870740 | orchestrator |
2026-01-30 03:45:04.870751 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-30 03:45:04.870762 | orchestrator | Friday 30 January 2026 03:44:04 +0000 (0:00:00.579) 0:01:05.304 ********
2026-01-30 03:45:04.870773 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:04.870784 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:04.870795 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:04.870806 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:04.870817 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:04.870828 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:04.870839 | orchestrator |
2026-01-30 03:45:04.870850 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 03:45:04.870861 | orchestrator | Friday 30 January 2026 03:44:05 +0000 (0:00:00.742) 0:01:06.047 ********
2026-01-30 03:45:04.870872 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:04.870883 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:04.870894 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:04.870905 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:04.870916 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:04.870927 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:04.870938 | orchestrator |
2026-01-30 03:45:04.870949 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 03:45:04.870960 | orchestrator | Friday 30 January 2026 03:44:05 +0000 (0:00:00.591) 0:01:06.638 ********
2026-01-30 03:45:04.870979 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:45:04.870991 | orchestrator |
2026-01-30 03:45:04.871003 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-30 03:45:04.871013 | orchestrator | Friday 30 January 2026 03:44:07 +0000 (0:00:01.253) 0:01:07.891 ********
2026-01-30 03:45:04.871024 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:45:04.871035 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:45:04.871046 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:45:04.871057 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:45:04.871068 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:45:04.871079 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:45:04.871090 | orchestrator |
2026-01-30 03:45:04.871101 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-30 03:45:04.871112 | orchestrator | Friday 30 January 2026 03:45:04 +0000 (0:00:57.168) 0:02:05.060 ********
2026-01-30 03:45:04.871123 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 03:45:04.871134 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 03:45:04.871145 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 03:45:04.871156 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:04.871167 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 03:45:04.871178 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 03:45:04.871189 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 03:45:04.871201 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:04.871219 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 03:45:04.871238 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 03:45:04.871265 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 03:45:04.871284 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:04.871301 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 03:45:04.871320 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 03:45:04.871339 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 03:45:04.871386 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:04.871405 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 03:45:04.871421 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 03:45:04.871432 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 03:45:04.871452 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.821951 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 03:45:27.822188 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 03:45:27.822213 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 03:45:27.822226 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.822239 | orchestrator |
2026-01-30 03:45:27.822251 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-30 03:45:27.822262 | orchestrator | Friday 30 January 2026 03:45:04 +0000 (0:00:00.628) 0:02:05.689 ********
2026-01-30 03:45:27.822273 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.822284 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.822295 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.822306 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.822317 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.822394 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.822408 | orchestrator |
2026-01-30 03:45:27.822419 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-30 03:45:27.822431 | orchestrator | Friday 30 January 2026 03:45:05 +0000 (0:00:00.769) 0:02:06.458 ********
2026-01-30 03:45:27.822441 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.822454 | orchestrator |
2026-01-30 03:45:27.822466 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-30 03:45:27.822479 | orchestrator | Friday 30 January 2026 03:45:05 +0000 (0:00:00.150) 0:02:06.608 ********
2026-01-30 03:45:27.822491 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.822504 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.822516 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.822528 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.822540 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.822552 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.822564 | orchestrator |
2026-01-30 03:45:27.822576 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-30 03:45:27.822588 | orchestrator | Friday 30 January 2026 03:45:06 +0000 (0:00:00.593) 0:02:07.202 ********
2026-01-30 03:45:27.822600 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.822612 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.822624 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.822637 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.822649 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.822663 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.822681 | orchestrator |
2026-01-30 03:45:27.822694 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-30 03:45:27.822706 | orchestrator | Friday 30 January 2026 03:45:07 +0000 (0:00:00.827) 0:02:08.029 ********
2026-01-30 03:45:27.822719 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.822731 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.822743 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.822756 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.822769 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.822781 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.822793 | orchestrator |
2026-01-30 03:45:27.822806 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-30 03:45:27.822818 | orchestrator | Friday 30 January 2026 03:45:07 +0000 (0:00:00.594) 0:02:08.624 ********
2026-01-30 03:45:27.822830 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:45:27.822842 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:45:27.822853 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:45:27.822863 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:45:27.822874 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:45:27.822885 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:45:27.822895 | orchestrator |
2026-01-30 03:45:27.822906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-30 03:45:27.822917 | orchestrator | Friday 30 January 2026 03:45:11 +0000 (0:00:03.638) 0:02:12.262 ********
2026-01-30 03:45:27.822928 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:45:27.822938 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:45:27.822949 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:45:27.822959 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:45:27.822970 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:45:27.822980 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:45:27.822991 | orchestrator |
2026-01-30 03:45:27.823001 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-30 03:45:27.823012 | orchestrator | Friday 30 January 2026 03:45:12 +0000 (0:00:00.583) 0:02:12.846 ********
2026-01-30 03:45:27.823024 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:45:27.823037 | orchestrator |
2026-01-30 03:45:27.823048 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-30 03:45:27.823069 | orchestrator | Friday 30 January 2026 03:45:13 +0000 (0:00:01.165) 0:02:14.011 ********
2026-01-30 03:45:27.823080 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823091 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823101 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823112 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823137 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823148 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823158 | orchestrator |
2026-01-30 03:45:27.823169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-30 03:45:27.823180 | orchestrator | Friday 30 January 2026 03:45:13 +0000 (0:00:00.781) 0:02:14.793 ********
2026-01-30 03:45:27.823191 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823201 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823212 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823223 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823233 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823244 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823255 | orchestrator |
2026-01-30 03:45:27.823265 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-30 03:45:27.823276 | orchestrator | Friday 30 January 2026 03:45:14 +0000 (0:00:00.561) 0:02:15.355 ********
2026-01-30 03:45:27.823287 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823320 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823332 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823389 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823403 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823414 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823425 | orchestrator |
2026-01-30 03:45:27.823436 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-30 03:45:27.823447 | orchestrator | Friday 30 January 2026 03:45:15 +0000 (0:00:00.800) 0:02:16.155 ********
2026-01-30 03:45:27.823458 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823469 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823479 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823490 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823501 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823511 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823522 | orchestrator |
2026-01-30 03:45:27.823533 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-30 03:45:27.823544 | orchestrator | Friday 30 January 2026 03:45:15 +0000 (0:00:00.580) 0:02:16.736 ********
2026-01-30 03:45:27.823555 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823565 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823576 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823587 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823597 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823608 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823619 | orchestrator |
2026-01-30 03:45:27.823629 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-30 03:45:27.823640 | orchestrator | Friday 30 January 2026 03:45:16 +0000 (0:00:00.765) 0:02:17.501 ********
2026-01-30 03:45:27.823651 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823662 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823672 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823683 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823693 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823704 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823715 | orchestrator |
2026-01-30 03:45:27.823726 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-30 03:45:27.823737 | orchestrator | Friday 30 January 2026 03:45:17 +0000 (0:00:00.571) 0:02:18.072 ********
2026-01-30 03:45:27.823757 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823768 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823779 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823790 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823800 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823811 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823822 | orchestrator |
2026-01-30 03:45:27.823833 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-30 03:45:27.823843 | orchestrator | Friday 30 January 2026 03:45:18 +0000 (0:00:00.784) 0:02:18.857 ********
2026-01-30 03:45:27.823854 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:27.823865 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:27.823876 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:27.823887 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:27.823897 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:27.823908 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:27.823918 | orchestrator |
2026-01-30 03:45:27.823929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-30 03:45:27.823940 | orchestrator | Friday 30 January 2026 03:45:18 +0000 (0:00:00.604) 0:02:19.461 ********
2026-01-30 03:45:27.823951 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:45:27.823962 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:45:27.823972 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:45:27.823983 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:45:27.823994 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:45:27.824004 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:45:27.824015 | orchestrator |
2026-01-30 03:45:27.824026 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-30 03:45:27.824037 | orchestrator | Friday 30 January 2026 03:45:19 +0000 (0:00:01.229) 0:02:20.691 ********
2026-01-30 03:45:27.824049 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:45:27.824062 | orchestrator |
2026-01-30 03:45:27.824073 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-30 03:45:27.824084 | orchestrator | Friday 30 January 2026 03:45:21 +0000 (0:00:01.327) 0:02:22.018 ********
2026-01-30 03:45:27.824095 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-01-30 03:45:27.824106 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-01-30 03:45:27.824117 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-01-30 03:45:27.824127 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-01-30 03:45:27.824138 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-30 03:45:27.824149 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-01-30 03:45:27.824159 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-30 03:45:27.824176 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-01-30 03:45:27.824187 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-30 03:45:27.824198 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-30 03:45:27.824209 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-30 03:45:27.824221 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-30 03:45:27.824231 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-01-30 03:45:27.824242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-30 03:45:27.824253 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-30 03:45:27.824263 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-30 03:45:27.824274 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-30 03:45:27.824293 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-30 03:45:32.804181 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-30 03:45:32.804289 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-30 03:45:32.804298 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-01-30 03:45:32.804305 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-30 03:45:32.804311 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-30 03:45:32.804317 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-30 03:45:32.804323 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-30 03:45:32.804330 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-30 03:45:32.804336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-01-30 03:45:32.804387 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-30 03:45:32.804395 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-30 03:45:32.804401 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-30 03:45:32.804407 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-30 03:45:32.804413 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-30 03:45:32.804419 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-30 03:45:32.804426 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-01-30 03:45:32.804432 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-30 03:45:32.804438 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-30 03:45:32.804445 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-30 03:45:32.804450 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-30 03:45:32.804457 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-30 03:45:32.804463 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-01-30 03:45:32.804468 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-30 03:45:32.804474 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-30 03:45:32.804480 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-30 03:45:32.804486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-30 03:45:32.804492 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 03:45:32.804498 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-01-30 03:45:32.804504 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-30 03:45:32.804515 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 03:45:32.804525 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 03:45:32.804535 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-30 03:45:32.804544 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 03:45:32.804553 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-01-30 03:45:32.804561 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 03:45:32.804571 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 03:45:32.804582 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 03:45:32.804593 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 03:45:32.804602 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 03:45:32.804611 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 03:45:32.804621 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 03:45:32.804632 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 03:45:32.804643 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 03:45:32.804662 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 03:45:32.804672 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 03:45:32.804683 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 03:45:32.804692 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 03:45:32.804698 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 03:45:32.804704 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 03:45:32.804721 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 03:45:32.804728 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 03:45:32.804733 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 03:45:32.804739 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 03:45:32.804746 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 03:45:32.804753 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 03:45:32.804760 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 03:45:32.804766 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 03:45:32.804788 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 03:45:32.804794 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 03:45:32.804801 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 03:45:32.804808 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 03:45:32.804815 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-01-30 03:45:32.804822 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 03:45:32.804829 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-01-30 03:45:32.804836 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-01-30 03:45:32.804842 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-01-30 03:45:32.804849 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 03:45:32.804855 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 03:45:32.804862 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 03:45:32.804870 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-01-30 03:45:32.804880 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-01-30 03:45:32.804894 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-01-30 03:45:32.804906 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-01-30 03:45:32.804914 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 03:45:32.804923 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-01-30 03:45:32.804932 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-01-30 03:45:32.804941 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-01-30 03:45:32.804950 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-01-30 03:45:32.804959 | orchestrator |
2026-01-30 03:45:32.804970 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-30 03:45:32.804979 | orchestrator | Friday 30 January 2026 03:45:27 +0000 (0:00:06.612) 0:02:28.630 ********
2026-01-30 03:45:32.804988 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:32.804996 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:32.805005 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:32.805016 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:45:32.805035 | orchestrator |
2026-01-30 03:45:32.805045 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-30 03:45:32.805053 | orchestrator | Friday 30 January 2026 03:45:28 +0000 (0:00:00.981) 0:02:29.611 ********
2026-01-30 03:45:32.805062 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-30 03:45:32.805072 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 03:45:32.805082 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-30 03:45:32.805091 | orchestrator |
2026-01-30 03:45:32.805099 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-30 03:45:32.805108 | orchestrator | Friday 30 January 2026 03:45:29 +0000 (0:00:00.706) 0:02:30.317 ********
2026-01-30 03:45:32.805117 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-30 03:45:32.805125 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 03:45:32.805134 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-30 03:45:32.805142 | orchestrator |
2026-01-30 03:45:32.805151 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-30 03:45:32.805160 | orchestrator | Friday 30 January 2026 03:45:30 +0000 (0:00:01.212) 0:02:31.530 ********
2026-01-30 03:45:32.805169 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:45:32.805178 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:45:32.805187 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:45:32.805196 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:32.805205 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:32.805214 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:32.805223 | orchestrator |
2026-01-30 03:45:32.805232 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 03:45:32.805250 | orchestrator | Friday 30 January 2026 03:45:31 +0000 (0:00:00.775) 0:02:32.306 ********
2026-01-30 03:45:32.805259 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:45:32.805267 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:45:32.805276 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:45:32.805285 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:32.805293 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:32.805302 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:32.805310 | orchestrator |
2026-01-30 03:45:32.805319 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 03:45:32.805327 | orchestrator | Friday 30 January 2026 03:45:32 +0000 (0:00:00.563) 0:02:32.869 ********
2026-01-30 03:45:32.805337 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:32.805375 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:32.805384 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:32.805393 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:32.805402 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:32.805411 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:32.805420 | orchestrator |
2026-01-30 03:45:32.805439 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 03:45:45.245204 | orchestrator | Friday 30 January 2026 03:45:32 +0000 (0:00:00.750) 0:02:33.620 ********
2026-01-30 03:45:45.245316 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:45.245332 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:45.245412 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:45.245433 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:45.245451 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:45.245466 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:45.245504 | orchestrator |
2026-01-30 03:45:45.245518 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 03:45:45.245529 | orchestrator | Friday 30 January 2026 03:45:33 +0000 (0:00:00.566) 0:02:34.187 ********
2026-01-30 03:45:45.245540 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:45.245551 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:45.245562 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:45.245573 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:45.245584 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:45.245595 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:45.245606 | orchestrator |
2026-01-30 03:45:45.245617 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-30 03:45:45.245630 | orchestrator | Friday 30 January 2026 03:45:34 +0000 (0:00:00.554) 0:02:34.977 ********
2026-01-30 03:45:45.245640 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:45.245651 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:45.245662 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:45.245673 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:45.245688 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:45.245707 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:45.245724 | orchestrator |
2026-01-30 03:45:45.245742 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 03:45:45.245763 | orchestrator | Friday 30 January 2026 03:45:34 +0000 (0:00:00.554) 0:02:35.531 ********
2026-01-30 03:45:45.245784 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:45:45.245803 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:45:45.245821 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:45:45.245835 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:45:45.245847 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:45:45.245859 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:45:45.245871 | orchestrator |
2026-01-30 03:45:45.245884 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report'
(new report)] *** 2026-01-30 03:45:45.245897 | orchestrator | Friday 30 January 2026 03:45:35 +0000 (0:00:00.809) 0:02:36.341 ******** 2026-01-30 03:45:45.245910 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:45.245922 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:45.245934 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:45.245946 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.245958 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.245971 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.245983 | orchestrator | 2026-01-30 03:45:45.245995 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 03:45:45.246007 | orchestrator | Friday 30 January 2026 03:45:36 +0000 (0:00:00.571) 0:02:36.912 ******** 2026-01-30 03:45:45.246077 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.246091 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.246105 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.246118 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:45:45.246131 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:45:45.246142 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:45:45.246153 | orchestrator | 2026-01-30 03:45:45.246164 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 03:45:45.246175 | orchestrator | Friday 30 January 2026 03:45:38 +0000 (0:00:02.851) 0:02:39.764 ******** 2026-01-30 03:45:45.246186 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:45:45.246197 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:45:45.246208 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:45:45.246219 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.246230 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.246240 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.246251 | 
orchestrator | 2026-01-30 03:45:45.246262 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 03:45:45.246284 | orchestrator | Friday 30 January 2026 03:45:39 +0000 (0:00:00.578) 0:02:40.342 ******** 2026-01-30 03:45:45.246295 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:45:45.246305 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:45:45.246316 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:45:45.246327 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.246384 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.246397 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.246408 | orchestrator | 2026-01-30 03:45:45.246420 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 03:45:45.246431 | orchestrator | Friday 30 January 2026 03:45:40 +0000 (0:00:00.790) 0:02:41.133 ******** 2026-01-30 03:45:45.246442 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:45.246453 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:45.246479 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:45.246490 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.246501 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.246511 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.246523 | orchestrator | 2026-01-30 03:45:45.246534 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 03:45:45.246559 | orchestrator | Friday 30 January 2026 03:45:40 +0000 (0:00:00.580) 0:02:41.713 ******** 2026-01-30 03:45:45.246570 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 03:45:45.246584 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-01-30 03:45:45.246595 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 03:45:45.246606 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.246638 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.246650 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.246661 | orchestrator | 2026-01-30 03:45:45.246672 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 03:45:45.246683 | orchestrator | Friday 30 January 2026 03:45:41 +0000 (0:00:00.819) 0:02:42.532 ******** 2026-01-30 03:45:45.246696 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-30 03:45:45.246711 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-30 03:45:45.246723 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:45.246743 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-30 03:45:45.246762 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-30 03:45:45.246782 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:45.246802 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-30 03:45:45.246836 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-30 03:45:45.246855 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:45.246866 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.246877 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.246888 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.246899 | orchestrator | 2026-01-30 03:45:45.246910 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 03:45:45.246921 | orchestrator | Friday 30 January 2026 03:45:42 +0000 (0:00:00.646) 0:02:43.179 ******** 2026-01-30 03:45:45.246931 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:45.246942 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:45.246953 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:45.246964 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.246974 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.246985 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.246996 | orchestrator | 
2026-01-30 03:45:45.247007 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 03:45:45.247017 | orchestrator | Friday 30 January 2026 03:45:43 +0000 (0:00:00.778) 0:02:43.957 ******** 2026-01-30 03:45:45.247028 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:45.247039 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:45.247049 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:45.247060 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.247071 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.247081 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.247092 | orchestrator | 2026-01-30 03:45:45.247103 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 03:45:45.247114 | orchestrator | Friday 30 January 2026 03:45:43 +0000 (0:00:00.737) 0:02:44.695 ******** 2026-01-30 03:45:45.247132 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:45.247143 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:45.247153 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:45.247164 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:45:45.247175 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.247185 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.247196 | orchestrator | 2026-01-30 03:45:45.247207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 03:45:45.247218 | orchestrator | Friday 30 January 2026 03:45:44 +0000 (0:00:00.602) 0:02:45.298 ******** 2026-01-30 03:45:45.247229 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:45:45.247240 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:45:45.247250 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:45:45.247261 | orchestrator | skipping: 
[testbed-node-0] 2026-01-30 03:45:45.247272 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:45:45.247282 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:45:45.247293 | orchestrator | 2026-01-30 03:45:45.247304 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 03:45:45.247323 | orchestrator | Friday 30 January 2026 03:45:45 +0000 (0:00:00.753) 0:02:46.051 ******** 2026-01-30 03:46:01.403116 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.403212 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:46:01.403242 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:46:01.403250 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:46:01.403258 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:46:01.403265 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:46:01.403311 | orchestrator | 2026-01-30 03:46:01.403320 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 03:46:01.403329 | orchestrator | Friday 30 January 2026 03:45:45 +0000 (0:00:00.637) 0:02:46.689 ******** 2026-01-30 03:46:01.403378 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:46:01.403387 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:46:01.403394 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:46:01.403401 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:46:01.403408 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:46:01.403414 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:46:01.403421 | orchestrator | 2026-01-30 03:46:01.403428 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 03:46:01.403435 | orchestrator | Friday 30 January 2026 03:45:46 +0000 (0:00:00.763) 0:02:47.452 ******** 2026-01-30 03:46:01.403442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:46:01.403449 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:46:01.403456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:46:01.403463 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.403470 | orchestrator | 2026-01-30 03:46:01.403477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 03:46:01.403484 | orchestrator | Friday 30 January 2026 03:45:47 +0000 (0:00:00.405) 0:02:47.858 ******** 2026-01-30 03:46:01.403491 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:46:01.403497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:46:01.403504 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:46:01.403511 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.403518 | orchestrator | 2026-01-30 03:46:01.403525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 03:46:01.403531 | orchestrator | Friday 30 January 2026 03:45:47 +0000 (0:00:00.401) 0:02:48.259 ******** 2026-01-30 03:46:01.403538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:46:01.403545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:46:01.403551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:46:01.403558 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.403565 | orchestrator | 2026-01-30 03:46:01.403572 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 03:46:01.403578 | orchestrator | Friday 30 January 2026 03:45:47 +0000 (0:00:00.397) 0:02:48.657 ******** 2026-01-30 03:46:01.403585 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:46:01.403592 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:46:01.403599 | orchestrator | ok: [testbed-node-5] 
2026-01-30 03:46:01.403605 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:46:01.403612 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:46:01.403619 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:46:01.403626 | orchestrator | 2026-01-30 03:46:01.403632 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 03:46:01.403639 | orchestrator | Friday 30 January 2026 03:45:48 +0000 (0:00:00.603) 0:02:49.260 ******** 2026-01-30 03:46:01.403646 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-30 03:46:01.403653 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-30 03:46:01.403660 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-30 03:46:01.403666 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-30 03:46:01.403673 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:46:01.403680 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-30 03:46:01.403687 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:46:01.403693 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-30 03:46:01.403700 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:46:01.403708 | orchestrator | 2026-01-30 03:46:01.403719 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 03:46:01.403735 | orchestrator | Friday 30 January 2026 03:45:50 +0000 (0:00:01.651) 0:02:50.912 ******** 2026-01-30 03:46:01.403742 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:46:01.403749 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:46:01.403755 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:46:01.403762 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:46:01.403769 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:46:01.403775 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:46:01.403782 | orchestrator | 2026-01-30 03:46:01.403789 | orchestrator | RUNNING 
HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-30 03:46:01.403795 | orchestrator | Friday 30 January 2026 03:45:52 +0000 (0:00:02.607) 0:02:53.519 ******** 2026-01-30 03:46:01.403802 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:46:01.403821 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:46:01.403828 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:46:01.403837 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:46:01.403849 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:46:01.403859 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:46:01.403869 | orchestrator | 2026-01-30 03:46:01.403880 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-30 03:46:01.403892 | orchestrator | Friday 30 January 2026 03:45:53 +0000 (0:00:00.984) 0:02:54.504 ******** 2026-01-30 03:46:01.403904 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.403914 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:46:01.403924 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:46:01.403932 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:46:01.403939 | orchestrator | 2026-01-30 03:46:01.403946 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-30 03:46:01.403953 | orchestrator | Friday 30 January 2026 03:45:54 +0000 (0:00:01.056) 0:02:55.560 ******** 2026-01-30 03:46:01.403960 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:46:01.403980 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:46:01.403988 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:46:01.403995 | orchestrator | 2026-01-30 03:46:01.404001 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-30 03:46:01.404008 | orchestrator | Friday 30 January 2026 03:45:55 +0000 
(0:00:00.326) 0:02:55.887 ******** 2026-01-30 03:46:01.404016 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:46:01.404022 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:46:01.404029 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:46:01.404036 | orchestrator | 2026-01-30 03:46:01.404043 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-30 03:46:01.404050 | orchestrator | Friday 30 January 2026 03:45:56 +0000 (0:00:01.385) 0:02:57.272 ******** 2026-01-30 03:46:01.404056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 03:46:01.404063 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 03:46:01.404070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 03:46:01.404077 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:46:01.404084 | orchestrator | 2026-01-30 03:46:01.404090 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-30 03:46:01.404097 | orchestrator | Friday 30 January 2026 03:45:57 +0000 (0:00:00.607) 0:02:57.880 ******** 2026-01-30 03:46:01.404104 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:46:01.404111 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:46:01.404118 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:46:01.404125 | orchestrator | 2026-01-30 03:46:01.404131 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-30 03:46:01.404138 | orchestrator | Friday 30 January 2026 03:45:57 +0000 (0:00:00.320) 0:02:58.200 ******** 2026-01-30 03:46:01.404145 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:46:01.404152 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:46:01.404159 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:46:01.404171 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, 
testbed-node-4, testbed-node-5 2026-01-30 03:46:01.404178 | orchestrator | 2026-01-30 03:46:01.404185 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-30 03:46:01.404192 | orchestrator | Friday 30 January 2026 03:45:58 +0000 (0:00:00.967) 0:02:59.168 ******** 2026-01-30 03:46:01.404198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:46:01.404205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:46:01.404212 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:46:01.404219 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404226 | orchestrator | 2026-01-30 03:46:01.404233 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-30 03:46:01.404239 | orchestrator | Friday 30 January 2026 03:45:58 +0000 (0:00:00.389) 0:02:59.558 ******** 2026-01-30 03:46:01.404246 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404253 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:46:01.404260 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:46:01.404267 | orchestrator | 2026-01-30 03:46:01.404273 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-30 03:46:01.404280 | orchestrator | Friday 30 January 2026 03:45:59 +0000 (0:00:00.309) 0:02:59.868 ******** 2026-01-30 03:46:01.404291 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404302 | orchestrator | 2026-01-30 03:46:01.404315 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-30 03:46:01.404383 | orchestrator | Friday 30 January 2026 03:45:59 +0000 (0:00:00.221) 0:03:00.089 ******** 2026-01-30 03:46:01.404397 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404408 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:46:01.404418 | 
orchestrator | skipping: [testbed-node-5] 2026-01-30 03:46:01.404429 | orchestrator | 2026-01-30 03:46:01.404440 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-30 03:46:01.404451 | orchestrator | Friday 30 January 2026 03:45:59 +0000 (0:00:00.493) 0:03:00.583 ******** 2026-01-30 03:46:01.404462 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404472 | orchestrator | 2026-01-30 03:46:01.404482 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-30 03:46:01.404492 | orchestrator | Friday 30 January 2026 03:45:59 +0000 (0:00:00.210) 0:03:00.794 ******** 2026-01-30 03:46:01.404503 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404514 | orchestrator | 2026-01-30 03:46:01.404525 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-30 03:46:01.404536 | orchestrator | Friday 30 January 2026 03:46:00 +0000 (0:00:00.231) 0:03:01.025 ******** 2026-01-30 03:46:01.404547 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404558 | orchestrator | 2026-01-30 03:46:01.404569 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-30 03:46:01.404580 | orchestrator | Friday 30 January 2026 03:46:00 +0000 (0:00:00.141) 0:03:01.166 ******** 2026-01-30 03:46:01.404599 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404610 | orchestrator | 2026-01-30 03:46:01.404621 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-30 03:46:01.404631 | orchestrator | Friday 30 January 2026 03:46:00 +0000 (0:00:00.230) 0:03:01.397 ******** 2026-01-30 03:46:01.404643 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404655 | orchestrator | 2026-01-30 03:46:01.404666 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 
2026-01-30 03:46:01.404678 | orchestrator | Friday 30 January 2026 03:46:00 +0000 (0:00:00.231) 0:03:01.628 ******** 2026-01-30 03:46:01.404689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:46:01.404700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:46:01.404707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:46:01.404723 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:01.404730 | orchestrator | 2026-01-30 03:46:01.404736 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-30 03:46:01.404743 | orchestrator | Friday 30 January 2026 03:46:01 +0000 (0:00:00.379) 0:03:02.008 ******** 2026-01-30 03:46:01.404759 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:19.158261 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:46:19.158422 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:46:19.158445 | orchestrator | 2026-01-30 03:46:19.158459 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-30 03:46:19.158473 | orchestrator | Friday 30 January 2026 03:46:01 +0000 (0:00:00.329) 0:03:02.337 ******** 2026-01-30 03:46:19.158486 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:19.158499 | orchestrator | 2026-01-30 03:46:19.158512 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-30 03:46:19.158524 | orchestrator | Friday 30 January 2026 03:46:01 +0000 (0:00:00.253) 0:03:02.590 ******** 2026-01-30 03:46:19.158536 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:46:19.158548 | orchestrator | 2026-01-30 03:46:19.158562 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-30 03:46:19.158574 | orchestrator | Friday 30 January 2026 03:46:01 +0000 (0:00:00.210) 0:03:02.801 ******** 2026-01-30 
03:46:19.158587 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.158599 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.158612 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:19.158626 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:46:19.158639 | orchestrator |
2026-01-30 03:46:19.158651 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-01-30 03:46:19.158663 | orchestrator | Friday 30 January 2026 03:46:03 +0000 (0:00:01.067) 0:03:03.869 ********
2026-01-30 03:46:19.158675 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:46:19.158689 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:46:19.158701 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:46:19.158713 | orchestrator |
2026-01-30 03:46:19.158725 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-01-30 03:46:19.158757 | orchestrator | Friday 30 January 2026 03:46:03 +0000 (0:00:00.334) 0:03:04.203 ********
2026-01-30 03:46:19.158769 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:46:19.158781 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:46:19.158793 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:46:19.158806 | orchestrator |
2026-01-30 03:46:19.158818 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-01-30 03:46:19.158832 | orchestrator | Friday 30 January 2026 03:46:04 +0000 (0:00:01.538) 0:03:05.741 ********
2026-01-30 03:46:19.158845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 03:46:19.158858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 03:46:19.158871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 03:46:19.158883 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:46:19.158895 | orchestrator |
2026-01-30 03:46:19.158908 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-01-30 03:46:19.158921 | orchestrator | Friday 30 January 2026 03:46:05 +0000 (0:00:00.610) 0:03:06.351 ********
2026-01-30 03:46:19.158935 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:46:19.158947 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:46:19.158960 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:46:19.158974 | orchestrator |
2026-01-30 03:46:19.158988 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-30 03:46:19.159001 | orchestrator | Friday 30 January 2026 03:46:05 +0000 (0:00:00.349) 0:03:06.701 ********
2026-01-30 03:46:19.159014 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.159027 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.159039 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:19.159085 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:46:19.159098 | orchestrator |
2026-01-30 03:46:19.159110 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-30 03:46:19.159123 | orchestrator | Friday 30 January 2026 03:46:06 +0000 (0:00:01.035) 0:03:07.736 ********
2026-01-30 03:46:19.159136 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:46:19.159149 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:46:19.159161 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:46:19.159172 | orchestrator |
2026-01-30 03:46:19.159184 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-30 03:46:19.159196 | orchestrator | Friday 30 January 2026 03:46:07 +0000 (0:00:00.307) 0:03:08.044 ********
2026-01-30 03:46:19.159208 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:46:19.159219 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:46:19.159231 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:46:19.159243 | orchestrator |
2026-01-30 03:46:19.159255 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-30 03:46:19.159267 | orchestrator | Friday 30 January 2026 03:46:08 +0000 (0:00:01.189) 0:03:09.233 ********
2026-01-30 03:46:19.159279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 03:46:19.159291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 03:46:19.159317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 03:46:19.159351 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:46:19.159365 | orchestrator |
2026-01-30 03:46:19.159378 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-30 03:46:19.159392 | orchestrator | Friday 30 January 2026 03:46:09 +0000 (0:00:00.809) 0:03:10.043 ********
2026-01-30 03:46:19.159400 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:46:19.159407 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:46:19.159415 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:46:19.159422 | orchestrator |
2026-01-30 03:46:19.159429 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-01-30 03:46:19.159436 | orchestrator | Friday 30 January 2026 03:46:09 +0000 (0:00:00.592) 0:03:10.635 ********
2026-01-30 03:46:19.159443 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:46:19.159451 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:46:19.159458 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:46:19.159465 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.159472 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.159479 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:19.159487 | orchestrator |
2026-01-30 03:46:19.159512 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-30 03:46:19.159520 | orchestrator | Friday 30 January 2026 03:46:10 +0000 (0:00:00.629) 0:03:11.265 ********
2026-01-30 03:46:19.159527 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:46:19.159534 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:46:19.159541 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:46:19.159548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:46:19.159556 | orchestrator |
2026-01-30 03:46:19.159563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-30 03:46:19.159570 | orchestrator | Friday 30 January 2026 03:46:11 +0000 (0:00:01.025) 0:03:12.290 ********
2026-01-30 03:46:19.159577 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:19.159584 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:19.159592 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:19.159599 | orchestrator |
2026-01-30 03:46:19.159606 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-30 03:46:19.159613 | orchestrator | Friday 30 January 2026 03:46:11 +0000 (0:00:00.332) 0:03:12.622 ********
2026-01-30 03:46:19.159621 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:46:19.159642 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:46:19.159654 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:46:19.159665 | orchestrator |
2026-01-30 03:46:19.159677 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-30 03:46:19.159690 | orchestrator | Friday 30 January 2026 03:46:12 +0000 (0:00:01.171) 0:03:13.794 ********
2026-01-30 03:46:19.159702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 03:46:19.159713 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 03:46:19.159726 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 03:46:19.159738 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.159750 | orchestrator |
2026-01-30 03:46:19.159762 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-30 03:46:19.159774 | orchestrator | Friday 30 January 2026 03:46:13 +0000 (0:00:00.972) 0:03:14.767 ********
2026-01-30 03:46:19.159786 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:19.159798 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:19.159810 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:19.159823 | orchestrator |
2026-01-30 03:46:19.159832 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-01-30 03:46:19.159839 | orchestrator |
2026-01-30 03:46:19.159847 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 03:46:19.159854 | orchestrator | Friday 30 January 2026 03:46:14 +0000 (0:00:00.596) 0:03:15.364 ********
2026-01-30 03:46:19.159862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:46:19.159871 | orchestrator |
2026-01-30 03:46:19.159879 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 03:46:19.159889 | orchestrator | Friday 30 January 2026 03:46:15 +0000 (0:00:00.693) 0:03:16.057 ********
2026-01-30 03:46:19.159900 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:46:19.159911 | orchestrator |
2026-01-30 03:46:19.159922 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 03:46:19.159933 | orchestrator | Friday 30 January 2026 03:46:15 +0000 (0:00:00.520) 0:03:16.577 ********
2026-01-30 03:46:19.160020 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:19.160032 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:19.160043 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:19.160055 | orchestrator |
2026-01-30 03:46:19.160067 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 03:46:19.160078 | orchestrator | Friday 30 January 2026 03:46:16 +0000 (0:00:00.693) 0:03:17.271 ********
2026-01-30 03:46:19.160090 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.160101 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.160111 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:19.160122 | orchestrator |
2026-01-30 03:46:19.160134 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 03:46:19.160145 | orchestrator | Friday 30 January 2026 03:46:16 +0000 (0:00:00.527) 0:03:17.798 ********
2026-01-30 03:46:19.160157 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.160169 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.160181 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:19.160192 | orchestrator |
2026-01-30 03:46:19.160203 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 03:46:19.160215 | orchestrator | Friday 30 January 2026 03:46:17 +0000 (0:00:00.315) 0:03:18.114 ********
2026-01-30 03:46:19.160226 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.160237 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.160258 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:19.160270 | orchestrator |
2026-01-30 03:46:19.160282 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 03:46:19.160293 | orchestrator | Friday 30 January 2026 03:46:17 +0000 (0:00:00.304) 0:03:18.419 ********
2026-01-30 03:46:19.160315 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:19.160353 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:19.160367 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:19.160378 | orchestrator |
2026-01-30 03:46:19.160389 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 03:46:19.160400 | orchestrator | Friday 30 January 2026 03:46:18 +0000 (0:00:00.695) 0:03:19.115 ********
2026-01-30 03:46:19.160412 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.160423 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.160434 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:19.160443 | orchestrator |
2026-01-30 03:46:19.160450 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 03:46:19.160457 | orchestrator | Friday 30 January 2026 03:46:18 +0000 (0:00:00.567) 0:03:19.682 ********
2026-01-30 03:46:19.160464 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:19.160470 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:19.160487 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:39.718742 | orchestrator |
2026-01-30 03:46:39.718859 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 03:46:39.718877 | orchestrator | Friday 30 January 2026 03:46:19 +0000 (0:00:00.291) 0:03:19.974 ********
2026-01-30 03:46:39.718889 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.718902 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.718913 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.718924 | orchestrator |
2026-01-30 03:46:39.718935 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 03:46:39.718947 | orchestrator | Friday 30 January 2026 03:46:19 +0000 (0:00:00.709) 0:03:20.683 ********
2026-01-30 03:46:39.718958 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.718969 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.718980 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.718991 | orchestrator |
2026-01-30 03:46:39.719002 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 03:46:39.719014 | orchestrator | Friday 30 January 2026 03:46:20 +0000 (0:00:00.707) 0:03:21.390 ********
2026-01-30 03:46:39.719025 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:39.719037 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:39.719049 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:39.719060 | orchestrator |
2026-01-30 03:46:39.719071 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 03:46:39.719083 | orchestrator | Friday 30 January 2026 03:46:21 +0000 (0:00:00.511) 0:03:21.901 ********
2026-01-30 03:46:39.719094 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.719106 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.719117 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.719128 | orchestrator |
2026-01-30 03:46:39.719139 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 03:46:39.719150 | orchestrator | Friday 30 January 2026 03:46:21 +0000 (0:00:00.346) 0:03:22.248 ********
2026-01-30 03:46:39.719161 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:39.719173 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:39.719184 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:39.719195 | orchestrator |
2026-01-30 03:46:39.719206 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 03:46:39.719217 | orchestrator | Friday 30 January 2026 03:46:21 +0000 (0:00:00.319) 0:03:22.568 ********
2026-01-30 03:46:39.719228 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:39.719239 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:39.719251 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:39.719262 | orchestrator |
2026-01-30 03:46:39.719273 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 03:46:39.719284 | orchestrator | Friday 30 January 2026 03:46:22 +0000 (0:00:00.313) 0:03:22.881 ********
2026-01-30 03:46:39.719295 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:39.719387 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:39.719402 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:39.719413 | orchestrator |
2026-01-30 03:46:39.719424 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 03:46:39.719435 | orchestrator | Friday 30 January 2026 03:46:22 +0000 (0:00:00.521) 0:03:23.403 ********
2026-01-30 03:46:39.719446 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:39.719457 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:39.719468 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:39.719479 | orchestrator |
2026-01-30 03:46:39.719490 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 03:46:39.719501 | orchestrator | Friday 30 January 2026 03:46:22 +0000 (0:00:00.325) 0:03:23.728 ********
2026-01-30 03:46:39.719512 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:39.719523 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:46:39.719534 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:46:39.719545 | orchestrator |
2026-01-30 03:46:39.719556 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 03:46:39.719568 | orchestrator | Friday 30 January 2026 03:46:23 +0000 (0:00:00.305) 0:03:24.034 ********
2026-01-30 03:46:39.719579 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.719590 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.719601 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.719612 | orchestrator |
2026-01-30 03:46:39.719623 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 03:46:39.719634 | orchestrator | Friday 30 January 2026 03:46:23 +0000 (0:00:00.343) 0:03:24.378 ********
2026-01-30 03:46:39.719645 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.719656 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.719667 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.719677 | orchestrator |
2026-01-30 03:46:39.719689 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 03:46:39.719700 | orchestrator | Friday 30 January 2026 03:46:24 +0000 (0:00:00.546) 0:03:24.924 ********
2026-01-30 03:46:39.719711 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.719722 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.719732 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.719743 | orchestrator |
2026-01-30 03:46:39.719770 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-30 03:46:39.719782 | orchestrator | Friday 30 January 2026 03:46:24 +0000 (0:00:00.533) 0:03:25.458 ********
2026-01-30 03:46:39.719793 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.719805 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.719815 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.719826 | orchestrator |
2026-01-30 03:46:39.719837 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-30 03:46:39.719848 | orchestrator | Friday 30 January 2026 03:46:24 +0000 (0:00:00.332) 0:03:25.791 ********
2026-01-30 03:46:39.719861 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:46:39.719873 | orchestrator |
2026-01-30 03:46:39.719884 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-30 03:46:39.719895 | orchestrator | Friday 30 January 2026 03:46:25 +0000 (0:00:00.790) 0:03:26.581 ********
2026-01-30 03:46:39.719906 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:46:39.719917 | orchestrator |
2026-01-30 03:46:39.719928 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-30 03:46:39.719957 | orchestrator | Friday 30 January 2026 03:46:25 +0000 (0:00:00.157) 0:03:26.739 ********
2026-01-30 03:46:39.719969 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-30 03:46:39.719980 | orchestrator |
2026-01-30 03:46:39.719991 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-30 03:46:39.720002 | orchestrator | Friday 30 January 2026 03:46:26 +0000 (0:00:00.927) 0:03:27.666 ********
2026-01-30 03:46:39.720021 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.720032 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.720043 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.720054 | orchestrator |
2026-01-30 03:46:39.720065 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-30 03:46:39.720077 | orchestrator | Friday 30 January 2026 03:46:27 +0000 (0:00:00.301) 0:03:27.967 ********
2026-01-30 03:46:39.720088 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.720098 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.720109 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.720120 | orchestrator |
2026-01-30 03:46:39.720131 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-30 03:46:39.720142 | orchestrator | Friday 30 January 2026 03:46:27 +0000 (0:00:00.516) 0:03:28.484 ********
2026-01-30 03:46:39.720153 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:46:39.720165 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:46:39.720176 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:46:39.720187 | orchestrator |
2026-01-30 03:46:39.720198 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-30 03:46:39.720209 | orchestrator | Friday 30 January 2026 03:46:28 +0000 (0:00:01.164) 0:03:29.649 ********
2026-01-30 03:46:39.720220 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:46:39.720231 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:46:39.720242 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:46:39.720253 | orchestrator |
2026-01-30 03:46:39.720264 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-30 03:46:39.720275 | orchestrator | Friday 30 January 2026 03:46:29 +0000 (0:00:00.777) 0:03:30.427 ********
2026-01-30 03:46:39.720286 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:46:39.720297 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:46:39.720308 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:46:39.720344 | orchestrator |
2026-01-30 03:46:39.720357 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-30 03:46:39.720369 | orchestrator | Friday 30 January 2026 03:46:30 +0000 (0:00:00.651) 0:03:31.079 ********
2026-01-30 03:46:39.720380 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.720391 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.720402 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.720413 | orchestrator |
2026-01-30 03:46:39.720424 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-30 03:46:39.720435 | orchestrator | Friday 30 January 2026 03:46:31 +0000 (0:00:00.931) 0:03:32.010 ********
2026-01-30 03:46:39.720446 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:46:39.720457 | orchestrator |
2026-01-30 03:46:39.720468 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-30 03:46:39.720479 | orchestrator | Friday 30 January 2026 03:46:32 +0000 (0:00:01.258) 0:03:33.268 ********
2026-01-30 03:46:39.720490 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.720500 | orchestrator |
2026-01-30 03:46:39.720511 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-30 03:46:39.720522 | orchestrator | Friday 30 January 2026 03:46:33 +0000 (0:00:00.710) 0:03:33.979 ********
2026-01-30 03:46:39.720533 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-30 03:46:39.720544 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 03:46:39.720555 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 03:46:39.720566 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-30 03:46:39.720578 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-30 03:46:39.720589 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-30 03:46:39.720600 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-30 03:46:39.720610 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-01-30 03:46:39.720621 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-30 03:46:39.720640 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-30 03:46:39.720651 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-01-30 03:46:39.720662 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-01-30 03:46:39.720673 | orchestrator |
2026-01-30 03:46:39.720684 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-30 03:46:39.720695 | orchestrator | Friday 30 January 2026 03:46:36 +0000 (0:00:03.107) 0:03:37.087 ********
2026-01-30 03:46:39.720706 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:46:39.720717 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:46:39.720734 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:46:39.720745 | orchestrator |
2026-01-30 03:46:39.720756 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-30 03:46:39.720767 | orchestrator | Friday 30 January 2026 03:46:37 +0000 (0:00:01.159) 0:03:38.247 ********
2026-01-30 03:46:39.720778 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.720789 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.720800 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.720812 | orchestrator |
2026-01-30 03:46:39.720830 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-30 03:46:39.720849 | orchestrator | Friday 30 January 2026 03:46:37 +0000 (0:00:00.519) 0:03:38.766 ********
2026-01-30 03:46:39.720866 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:46:39.720883 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:46:39.720900 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:46:39.720918 | orchestrator |
2026-01-30 03:46:39.720935 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-30 03:46:39.720953 | orchestrator | Friday 30 January 2026 03:46:38 +0000 (0:00:00.377) 0:03:39.143 ********
2026-01-30 03:46:39.720971 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:46:39.720989 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:46:39.721006 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:46:39.721021 | orchestrator |
2026-01-30 03:46:39.721050 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-30 03:47:41.884029 | orchestrator | Friday 30 January 2026 03:46:39 +0000 (0:00:01.387) 0:03:40.530 ********
2026-01-30 03:47:41.884141 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:47:41.884158 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:47:41.884170 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:47:41.884181 | orchestrator |
2026-01-30 03:47:41.884193 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-30 03:47:41.884204 | orchestrator | Friday 30 January 2026 03:46:40 +0000 (0:00:01.272) 0:03:41.803 ********
2026-01-30 03:47:41.884216 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:47:41.884227 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:47:41.884238 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:47:41.884249 | orchestrator |
2026-01-30 03:47:41.884260 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-30 03:47:41.884271 | orchestrator | Friday 30 January 2026 03:46:41 +0000 (0:00:00.495) 0:03:42.299 ********
2026-01-30 03:47:41.884283 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:47:41.884294 | orchestrator |
2026-01-30 03:47:41.884357 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-30 03:47:41.884370 | orchestrator | Friday 30 January 2026 03:46:42 +0000 (0:00:00.542) 0:03:42.841 ********
2026-01-30 03:47:41.884381 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:47:41.884392 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:47:41.884404 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:47:41.884415 | orchestrator |
2026-01-30 03:47:41.884426 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-30 03:47:41.884437 | orchestrator | Friday 30 January 2026 03:46:42 +0000 (0:00:00.286) 0:03:43.127 ********
2026-01-30 03:47:41.884448 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:47:41.884490 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:47:41.884502 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:47:41.884513 | orchestrator |
2026-01-30 03:47:41.884524 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-30 03:47:41.884535 | orchestrator | Friday 30 January 2026 03:46:42 +0000 (0:00:00.456) 0:03:43.584 ********
2026-01-30 03:47:41.884549 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:47:41.884571 | orchestrator |
2026-01-30 03:47:41.884589 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-30 03:47:41.884608 | orchestrator | Friday 30 January 2026 03:46:43 +0000 (0:00:00.510) 0:03:44.095 ********
2026-01-30 03:47:41.884627 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:47:41.884645 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:47:41.884666 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:47:41.884686 | orchestrator |
2026-01-30 03:47:41.884705 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-30 03:47:41.884721 | orchestrator | Friday 30 January 2026 03:46:44 +0000 (0:00:01.698) 0:03:45.794 ********
2026-01-30 03:47:41.884734 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:47:41.884746 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:47:41.884759 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:47:41.884771 | orchestrator |
2026-01-30 03:47:41.884784 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-30 03:47:41.884797 | orchestrator | Friday 30 January 2026 03:46:46 +0000 (0:00:01.747) 0:03:47.171 ********
2026-01-30 03:47:41.884809 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:47:41.884821 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:47:41.884833 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:47:41.884846 | orchestrator |
2026-01-30 03:47:41.884858 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-30 03:47:41.884871 | orchestrator | Friday 30 January 2026 03:46:48 +0000 (0:00:01.998) 0:03:48.919 ********
2026-01-30 03:47:41.884883 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:47:41.884895 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:47:41.884908 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:47:41.884919 | orchestrator |
2026-01-30 03:47:41.884931 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-30 03:47:41.884942 | orchestrator | Friday 30 January 2026 03:46:50 +0000 (0:00:01.998) 0:03:50.917 ********
2026-01-30 03:47:41.884953 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:47:41.884964 | orchestrator |
2026-01-30 03:47:41.884975 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-30 03:47:41.884985 | orchestrator | Friday 30 January 2026 03:46:50 +0000 (0:00:00.733) 0:03:51.650 ********
2026-01-30 03:47:41.885012 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-30 03:47:41.885023 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:47:41.885035 | orchestrator |
2026-01-30 03:47:41.885046 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-30 03:47:41.885057 | orchestrator | Friday 30 January 2026 03:47:12 +0000 (0:00:21.965) 0:04:13.616 ********
2026-01-30 03:47:41.885068 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:47:41.885079 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:47:41.885090 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:47:41.885101 | orchestrator |
2026-01-30 03:47:41.885112 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-30 03:47:41.885123 | orchestrator | Friday 30 January 2026 03:47:22 +0000 (0:00:09.495) 0:04:23.111 ********
2026-01-30 03:47:41.885134 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:47:41.885145 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:47:41.885156 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:47:41.885176 | orchestrator |
2026-01-30 03:47:41.885187 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-30 03:47:41.885198 | orchestrator | Friday 30 January 2026 03:47:22 +0000 (0:00:00.346) 0:04:23.458 ********
2026-01-30 03:47:41.885231 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e236837a165d3e5a9dcc3b905035b6c834d2bbb'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-30 03:47:41.885245 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e236837a165d3e5a9dcc3b905035b6c834d2bbb'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-30 03:47:41.885259 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e236837a165d3e5a9dcc3b905035b6c834d2bbb'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-30 03:47:41.885272 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e236837a165d3e5a9dcc3b905035b6c834d2bbb'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-30 03:47:41.885284 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e236837a165d3e5a9dcc3b905035b6c834d2bbb'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-30 03:47:41.885296 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5e236837a165d3e5a9dcc3b905035b6c834d2bbb'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5e236837a165d3e5a9dcc3b905035b6c834d2bbb'}])
2026-01-30 03:47:41.885341 | orchestrator |
2026-01-30 03:47:41.885356 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-30 03:47:41.885367 | orchestrator | Friday 30 January 2026 03:47:38 +0000 (0:00:15.595) 0:04:39.053 ********
2026-01-30 03:47:41.885378 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:47:41.885389 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:47:41.885400 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:47:41.885411 | orchestrator |
2026-01-30 03:47:41.885422 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-30 03:47:41.885433 | orchestrator | Friday 30 January 2026 03:47:38 +0000 (0:00:00.419) 0:04:39.473 ********
2026-01-30 03:47:41.885444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:47:41.885454 | orchestrator |
2026-01-30 03:47:41.885465 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-30 03:47:41.885476 | orchestrator | Friday 30 January 2026 03:47:39 +0000 (0:00:00.844) 0:04:40.317 ********
2026-01-30 03:47:41.885487 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:47:41.885497 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:47:41.885509 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:47:41.885519 | orchestrator |
2026-01-30 03:47:41.885530 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-30 03:47:41.885549 | orchestrator | Friday 30 January 2026 03:47:39 +0000 (0:00:00.356) 0:04:40.674 ********
2026-01-30 03:47:41.885566 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:47:41.885577 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:47:41.885588 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:47:41.885599 | orchestrator |
2026-01-30 03:47:41.885610 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-30 03:47:41.885621 | orchestrator | Friday 30 January 2026 03:47:40 +0000 (0:00:00.873) 0:04:41.021 ********
2026-01-30 03:47:41.885632 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 03:47:41.885643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 03:47:41.885653 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 03:47:41.885664 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:47:41.885675 | orchestrator |
2026-01-30 03:47:41.885686 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-30 03:47:41.885697 | orchestrator | Friday 30 January 2026 03:47:41 +0000 (0:00:00.873) 0:04:41.895 ********
2026-01-30 03:47:41.885707 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:47:41.885718 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:47:41.885729 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:47:41.885740 | orchestrator |
2026-01-30 03:47:41.885751 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-30 03:47:41.885762 | orchestrator |
2026-01-30 03:47:41.885780 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 03:48:08.744805 | orchestrator | Friday 30 January 2026 03:47:41 +0000 (0:00:00.797) 0:04:42.692 ********
2026-01-30 03:48:08.744917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:48:08.744935 | orchestrator |
2026-01-30 03:48:08.744948 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 03:48:08.744959 | orchestrator | Friday 30 January 2026 03:47:42 +0000 (0:00:00.542) 0:04:43.234 ********
2026-01-30 03:48:08.744971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-01-30 03:48:08.744982 | orchestrator | 2026-01-30 03:48:08.744993 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 03:48:08.745004 | orchestrator | Friday 30 January 2026 03:47:43 +0000 (0:00:00.845) 0:04:44.080 ******** 2026-01-30 03:48:08.745016 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.745028 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.745057 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.745078 | orchestrator | 2026-01-30 03:48:08.745089 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 03:48:08.745100 | orchestrator | Friday 30 January 2026 03:47:44 +0000 (0:00:00.747) 0:04:44.827 ******** 2026-01-30 03:48:08.745112 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.745124 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.745136 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.745147 | orchestrator | 2026-01-30 03:48:08.745158 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 03:48:08.745169 | orchestrator | Friday 30 January 2026 03:47:44 +0000 (0:00:00.303) 0:04:45.130 ******** 2026-01-30 03:48:08.745180 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.745191 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.745201 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.745212 | orchestrator | 2026-01-30 03:48:08.745223 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 03:48:08.745234 | orchestrator | Friday 30 January 2026 03:47:44 +0000 (0:00:00.323) 0:04:45.453 ******** 2026-01-30 03:48:08.745245 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.745257 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.745339 | orchestrator | skipping: 
[testbed-node-2] 2026-01-30 03:48:08.745356 | orchestrator | 2026-01-30 03:48:08.745368 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 03:48:08.745381 | orchestrator | Friday 30 January 2026 03:47:45 +0000 (0:00:00.549) 0:04:46.003 ******** 2026-01-30 03:48:08.745394 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.745406 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.745418 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.745430 | orchestrator | 2026-01-30 03:48:08.745443 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 03:48:08.745456 | orchestrator | Friday 30 January 2026 03:47:45 +0000 (0:00:00.738) 0:04:46.741 ******** 2026-01-30 03:48:08.745469 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.745481 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.745493 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.745506 | orchestrator | 2026-01-30 03:48:08.745518 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 03:48:08.745532 | orchestrator | Friday 30 January 2026 03:47:46 +0000 (0:00:00.319) 0:04:47.061 ******** 2026-01-30 03:48:08.745544 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.745557 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.745569 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.745581 | orchestrator | 2026-01-30 03:48:08.745594 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 03:48:08.745606 | orchestrator | Friday 30 January 2026 03:47:46 +0000 (0:00:00.288) 0:04:47.349 ******** 2026-01-30 03:48:08.745618 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.745629 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.745640 | orchestrator | ok: [testbed-node-2] 2026-01-30 
03:48:08.745651 | orchestrator | 2026-01-30 03:48:08.745662 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 03:48:08.745673 | orchestrator | Friday 30 January 2026 03:47:47 +0000 (0:00:01.097) 0:04:48.447 ******** 2026-01-30 03:48:08.745684 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.745695 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.745706 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.745717 | orchestrator | 2026-01-30 03:48:08.745728 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 03:48:08.745739 | orchestrator | Friday 30 January 2026 03:47:48 +0000 (0:00:00.782) 0:04:49.229 ******** 2026-01-30 03:48:08.745750 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.745761 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.745788 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.745799 | orchestrator | 2026-01-30 03:48:08.745810 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 03:48:08.745821 | orchestrator | Friday 30 January 2026 03:47:48 +0000 (0:00:00.313) 0:04:49.542 ******** 2026-01-30 03:48:08.745832 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.745843 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.745854 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.745865 | orchestrator | 2026-01-30 03:48:08.745876 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 03:48:08.745887 | orchestrator | Friday 30 January 2026 03:47:49 +0000 (0:00:00.319) 0:04:49.862 ******** 2026-01-30 03:48:08.745898 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.745911 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.745929 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.745948 | orchestrator | 
2026-01-30 03:48:08.745966 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 03:48:08.745982 | orchestrator | Friday 30 January 2026 03:47:49 +0000 (0:00:00.528) 0:04:50.390 ******** 2026-01-30 03:48:08.746000 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.746089 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.746111 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.746130 | orchestrator | 2026-01-30 03:48:08.746244 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 03:48:08.746271 | orchestrator | Friday 30 January 2026 03:47:49 +0000 (0:00:00.298) 0:04:50.688 ******** 2026-01-30 03:48:08.746322 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.746342 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.746362 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.746381 | orchestrator | 2026-01-30 03:48:08.746399 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 03:48:08.746418 | orchestrator | Friday 30 January 2026 03:47:50 +0000 (0:00:00.287) 0:04:50.976 ******** 2026-01-30 03:48:08.746437 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.746455 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.746474 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.746493 | orchestrator | 2026-01-30 03:48:08.746512 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 03:48:08.746531 | orchestrator | Friday 30 January 2026 03:47:50 +0000 (0:00:00.309) 0:04:51.285 ******** 2026-01-30 03:48:08.746549 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.746567 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.746586 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.746604 | orchestrator | 
2026-01-30 03:48:08.746623 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 03:48:08.746643 | orchestrator | Friday 30 January 2026 03:47:50 +0000 (0:00:00.535) 0:04:51.821 ******** 2026-01-30 03:48:08.746660 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.746680 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.746698 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.746716 | orchestrator | 2026-01-30 03:48:08.746735 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 03:48:08.746754 | orchestrator | Friday 30 January 2026 03:47:51 +0000 (0:00:00.335) 0:04:52.157 ******** 2026-01-30 03:48:08.746772 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.746791 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.746810 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.746828 | orchestrator | 2026-01-30 03:48:08.746846 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 03:48:08.746864 | orchestrator | Friday 30 January 2026 03:47:51 +0000 (0:00:00.326) 0:04:52.483 ******** 2026-01-30 03:48:08.746882 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.746900 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.746918 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.746937 | orchestrator | 2026-01-30 03:48:08.746956 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-30 03:48:08.746974 | orchestrator | Friday 30 January 2026 03:47:52 +0000 (0:00:00.849) 0:04:53.332 ******** 2026-01-30 03:48:08.746993 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 03:48:08.747012 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 03:48:08.747031 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-30 03:48:08.747050 | orchestrator | 2026-01-30 03:48:08.747068 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-30 03:48:08.747086 | orchestrator | Friday 30 January 2026 03:47:53 +0000 (0:00:00.632) 0:04:53.965 ******** 2026-01-30 03:48:08.747105 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:48:08.747123 | orchestrator | 2026-01-30 03:48:08.747142 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-30 03:48:08.747159 | orchestrator | Friday 30 January 2026 03:47:53 +0000 (0:00:00.566) 0:04:54.531 ******** 2026-01-30 03:48:08.747177 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:48:08.747196 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:48:08.747214 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:48:08.747231 | orchestrator | 2026-01-30 03:48:08.747250 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-30 03:48:08.747284 | orchestrator | Friday 30 January 2026 03:47:54 +0000 (0:00:00.978) 0:04:55.509 ******** 2026-01-30 03:48:08.747374 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:48:08.747394 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:48:08.747413 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:48:08.747427 | orchestrator | 2026-01-30 03:48:08.747438 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-30 03:48:08.747449 | orchestrator | Friday 30 January 2026 03:47:55 +0000 (0:00:00.321) 0:04:55.830 ******** 2026-01-30 03:48:08.747460 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-30 03:48:08.747471 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-30 03:48:08.747482 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-30 03:48:08.747493 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-30 03:48:08.747504 | orchestrator | 2026-01-30 03:48:08.747524 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-30 03:48:08.747535 | orchestrator | Friday 30 January 2026 03:48:05 +0000 (0:00:10.910) 0:05:06.741 ******** 2026-01-30 03:48:08.747546 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:48:08.747557 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:48:08.747568 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:48:08.747578 | orchestrator | 2026-01-30 03:48:08.747589 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-30 03:48:08.747600 | orchestrator | Friday 30 January 2026 03:48:06 +0000 (0:00:00.363) 0:05:07.104 ******** 2026-01-30 03:48:08.747610 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-30 03:48:08.747621 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-30 03:48:08.747632 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-30 03:48:08.747642 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-30 03:48:08.747653 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:48:08.747664 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:48:08.747674 | orchestrator | 2026-01-30 03:48:08.747685 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-30 03:48:08.747707 | orchestrator | Friday 30 January 2026 03:48:08 +0000 (0:00:02.453) 0:05:09.557 ******** 2026-01-30 03:49:10.534158 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-30 03:49:10.534254 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-30 03:49:10.534334 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-30 
03:49:10.534353 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-30 03:49:10.534366 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-30 03:49:10.534378 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-30 03:49:10.534391 | orchestrator | 2026-01-30 03:49:10.534404 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-30 03:49:10.534417 | orchestrator | Friday 30 January 2026 03:48:10 +0000 (0:00:01.592) 0:05:11.150 ******** 2026-01-30 03:49:10.534430 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:49:10.534443 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:49:10.534455 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:49:10.534467 | orchestrator | 2026-01-30 03:49:10.534479 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-30 03:49:10.534491 | orchestrator | Friday 30 January 2026 03:48:11 +0000 (0:00:00.683) 0:05:11.834 ******** 2026-01-30 03:49:10.534503 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:49:10.534516 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:49:10.534529 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:49:10.534542 | orchestrator | 2026-01-30 03:49:10.534554 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-30 03:49:10.534566 | orchestrator | Friday 30 January 2026 03:48:11 +0000 (0:00:00.335) 0:05:12.169 ******** 2026-01-30 03:49:10.534601 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:49:10.534614 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:49:10.534627 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:49:10.534639 | orchestrator | 2026-01-30 03:49:10.534651 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-30 03:49:10.534663 | orchestrator | Friday 30 January 2026 03:48:11 +0000 (0:00:00.312) 0:05:12.482 
******** 2026-01-30 03:49:10.534675 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:49:10.534687 | orchestrator | 2026-01-30 03:49:10.534699 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-30 03:49:10.534711 | orchestrator | Friday 30 January 2026 03:48:12 +0000 (0:00:00.738) 0:05:13.221 ******** 2026-01-30 03:49:10.534723 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:49:10.534736 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:49:10.534749 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:49:10.534761 | orchestrator | 2026-01-30 03:49:10.534773 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-30 03:49:10.534785 | orchestrator | Friday 30 January 2026 03:48:12 +0000 (0:00:00.313) 0:05:13.534 ******** 2026-01-30 03:49:10.534797 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:49:10.534809 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:49:10.534822 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:49:10.534835 | orchestrator | 2026-01-30 03:49:10.534847 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-30 03:49:10.534860 | orchestrator | Friday 30 January 2026 03:48:13 +0000 (0:00:00.312) 0:05:13.847 ******** 2026-01-30 03:49:10.534872 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:49:10.534885 | orchestrator | 2026-01-30 03:49:10.534897 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-30 03:49:10.534909 | orchestrator | Friday 30 January 2026 03:48:13 +0000 (0:00:00.741) 0:05:14.589 ******** 2026-01-30 03:49:10.534922 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:49:10.534935 | orchestrator | changed: 
[testbed-node-1] 2026-01-30 03:49:10.534947 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:49:10.534959 | orchestrator | 2026-01-30 03:49:10.534971 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-30 03:49:10.534984 | orchestrator | Friday 30 January 2026 03:48:14 +0000 (0:00:01.216) 0:05:15.805 ******** 2026-01-30 03:49:10.534996 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:49:10.535008 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:49:10.535021 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:49:10.535033 | orchestrator | 2026-01-30 03:49:10.535045 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-30 03:49:10.535058 | orchestrator | Friday 30 January 2026 03:48:16 +0000 (0:00:01.163) 0:05:16.969 ******** 2026-01-30 03:49:10.535070 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:49:10.535083 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:49:10.535096 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:49:10.535108 | orchestrator | 2026-01-30 03:49:10.535121 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-30 03:49:10.535144 | orchestrator | Friday 30 January 2026 03:48:18 +0000 (0:00:02.189) 0:05:19.158 ******** 2026-01-30 03:49:10.535156 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:49:10.535168 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:49:10.535180 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:49:10.535192 | orchestrator | 2026-01-30 03:49:10.535204 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-30 03:49:10.535234 | orchestrator | Friday 30 January 2026 03:48:20 +0000 (0:00:02.064) 0:05:21.222 ******** 2026-01-30 03:49:10.535247 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:49:10.535259 | orchestrator | skipping: 
[testbed-node-1] 2026-01-30 03:49:10.535271 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-30 03:49:10.535307 | orchestrator | 2026-01-30 03:49:10.535319 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-30 03:49:10.535331 | orchestrator | Friday 30 January 2026 03:48:20 +0000 (0:00:00.397) 0:05:21.620 ******** 2026-01-30 03:49:10.535344 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-30 03:49:10.535357 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-30 03:49:10.535385 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-30 03:49:10.535397 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-30 03:49:10.535409 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-01-30 03:49:10.535421 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-30 03:49:10.535433 | orchestrator | 2026-01-30 03:49:10.535446 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-30 03:49:10.535458 | orchestrator | Friday 30 January 2026 03:48:51 +0000 (0:00:30.471) 0:05:52.092 ******** 2026-01-30 03:49:10.535470 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-30 03:49:10.535482 | orchestrator | 2026-01-30 03:49:10.535494 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-30 03:49:10.535506 | orchestrator | Friday 30 January 2026 03:48:52 +0000 (0:00:01.619) 0:05:53.712 ******** 2026-01-30 03:49:10.535518 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:49:10.535530 | orchestrator | 2026-01-30 03:49:10.535543 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-30 03:49:10.535555 | orchestrator | Friday 30 January 2026 03:48:53 +0000 (0:00:00.722) 0:05:54.434 ******** 2026-01-30 03:49:10.535567 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:49:10.535579 | orchestrator | 2026-01-30 03:49:10.535591 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-30 03:49:10.535603 | orchestrator | Friday 30 January 2026 03:48:53 +0000 (0:00:00.154) 0:05:54.589 ******** 2026-01-30 03:49:10.535615 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-30 03:49:10.535627 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-30 03:49:10.535640 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-30 03:49:10.535652 | orchestrator | 2026-01-30 03:49:10.535664 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-30 03:49:10.535676 | orchestrator | Friday 30 January 2026 03:49:00 +0000 (0:00:06.449) 0:06:01.038 ******** 2026-01-30 03:49:10.535688 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-30 03:49:10.535700 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-30 03:49:10.535712 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-30 03:49:10.535724 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-30 03:49:10.535737 | orchestrator | 2026-01-30 03:49:10.535749 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-30 03:49:10.535761 | orchestrator | Friday 30 January 2026 03:49:05 +0000 (0:00:04.934) 0:06:05.973 ******** 2026-01-30 03:49:10.535773 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:49:10.535785 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:49:10.535797 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:49:10.535809 | orchestrator | 2026-01-30 03:49:10.535821 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-30 03:49:10.535833 | orchestrator | Friday 30 January 2026 03:49:06 +0000 (0:00:00.953) 0:06:06.926 ******** 2026-01-30 03:49:10.535845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:49:10.535864 | orchestrator | 2026-01-30 03:49:10.535876 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-30 03:49:10.535888 | orchestrator | Friday 30 January 2026 03:49:06 +0000 (0:00:00.557) 0:06:07.483 ******** 2026-01-30 03:49:10.535900 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:49:10.535912 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:49:10.535924 | orchestrator | ok: 
[testbed-node-2] 2026-01-30 03:49:10.535937 | orchestrator | 2026-01-30 03:49:10.535949 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-30 03:49:10.535961 | orchestrator | Friday 30 January 2026 03:49:07 +0000 (0:00:00.362) 0:06:07.845 ******** 2026-01-30 03:49:10.535973 | orchestrator | changed: [testbed-node-0] 2026-01-30 03:49:10.535985 | orchestrator | changed: [testbed-node-1] 2026-01-30 03:49:10.535997 | orchestrator | changed: [testbed-node-2] 2026-01-30 03:49:10.536009 | orchestrator | 2026-01-30 03:49:10.536022 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-30 03:49:10.536034 | orchestrator | Friday 30 January 2026 03:49:08 +0000 (0:00:01.526) 0:06:09.372 ******** 2026-01-30 03:49:10.536052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 03:49:10.536061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 03:49:10.536068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 03:49:10.536075 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:49:10.536082 | orchestrator | 2026-01-30 03:49:10.536090 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-30 03:49:10.536097 | orchestrator | Friday 30 January 2026 03:49:09 +0000 (0:00:00.711) 0:06:10.083 ******** 2026-01-30 03:49:10.536106 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:49:10.536119 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:49:10.536131 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:49:10.536142 | orchestrator | 2026-01-30 03:49:10.536154 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-30 03:49:10.536167 | orchestrator | 2026-01-30 03:49:10.536180 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 
03:49:10.536192 | orchestrator | Friday 30 January 2026 03:49:09 +0000 (0:00:00.559) 0:06:10.643 ******** 2026-01-30 03:49:10.536205 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:49:10.536219 | orchestrator | 2026-01-30 03:49:10.536231 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 03:49:10.536251 | orchestrator | Friday 30 January 2026 03:49:10 +0000 (0:00:00.705) 0:06:11.349 ******** 2026-01-30 03:49:27.315490 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:49:27.315599 | orchestrator | 2026-01-30 03:49:27.315614 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 03:49:27.315628 | orchestrator | Friday 30 January 2026 03:49:11 +0000 (0:00:00.562) 0:06:11.912 ******** 2026-01-30 03:49:27.315640 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.315652 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.315663 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.315674 | orchestrator | 2026-01-30 03:49:27.315685 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 03:49:27.315697 | orchestrator | Friday 30 January 2026 03:49:11 +0000 (0:00:00.653) 0:06:12.565 ******** 2026-01-30 03:49:27.315708 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.315719 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.315730 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.315741 | orchestrator | 2026-01-30 03:49:27.315752 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 03:49:27.315763 | orchestrator | Friday 30 January 2026 03:49:12 +0000 (0:00:00.698) 0:06:13.263 ******** 
2026-01-30 03:49:27.315804 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.315815 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.315826 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.315837 | orchestrator | 2026-01-30 03:49:27.315848 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 03:49:27.315859 | orchestrator | Friday 30 January 2026 03:49:13 +0000 (0:00:00.684) 0:06:13.948 ******** 2026-01-30 03:49:27.315870 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.315881 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.315892 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.315902 | orchestrator | 2026-01-30 03:49:27.315913 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 03:49:27.315924 | orchestrator | Friday 30 January 2026 03:49:13 +0000 (0:00:00.708) 0:06:14.656 ******** 2026-01-30 03:49:27.315935 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.315946 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.315957 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.315968 | orchestrator | 2026-01-30 03:49:27.315979 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 03:49:27.315993 | orchestrator | Friday 30 January 2026 03:49:14 +0000 (0:00:00.503) 0:06:15.160 ******** 2026-01-30 03:49:27.316006 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.316018 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.316031 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.316043 | orchestrator | 2026-01-30 03:49:27.316056 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 03:49:27.316069 | orchestrator | Friday 30 January 2026 03:49:14 +0000 (0:00:00.336) 0:06:15.497 ******** 2026-01-30 03:49:27.316081 | 
orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.316093 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.316106 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.316118 | orchestrator | 2026-01-30 03:49:27.316131 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 03:49:27.316143 | orchestrator | Friday 30 January 2026 03:49:14 +0000 (0:00:00.304) 0:06:15.801 ******** 2026-01-30 03:49:27.316156 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.316171 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.316189 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.316209 | orchestrator | 2026-01-30 03:49:27.316227 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 03:49:27.316246 | orchestrator | Friday 30 January 2026 03:49:15 +0000 (0:00:00.699) 0:06:16.500 ******** 2026-01-30 03:49:27.316265 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.316284 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.316297 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.316310 | orchestrator | 2026-01-30 03:49:27.316323 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 03:49:27.316365 | orchestrator | Friday 30 January 2026 03:49:16 +0000 (0:00:01.069) 0:06:17.570 ******** 2026-01-30 03:49:27.316378 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.316390 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.316400 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.316411 | orchestrator | 2026-01-30 03:49:27.316422 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 03:49:27.316433 | orchestrator | Friday 30 January 2026 03:49:17 +0000 (0:00:00.334) 0:06:17.904 ******** 2026-01-30 03:49:27.316444 | orchestrator | skipping: 
[testbed-node-3] 2026-01-30 03:49:27.316455 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.316465 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.316476 | orchestrator | 2026-01-30 03:49:27.316503 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 03:49:27.316515 | orchestrator | Friday 30 January 2026 03:49:17 +0000 (0:00:00.303) 0:06:18.207 ******** 2026-01-30 03:49:27.316526 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.316536 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.316556 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.316567 | orchestrator | 2026-01-30 03:49:27.316578 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 03:49:27.316589 | orchestrator | Friday 30 January 2026 03:49:17 +0000 (0:00:00.326) 0:06:18.534 ******** 2026-01-30 03:49:27.316600 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.316611 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.316622 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.316633 | orchestrator | 2026-01-30 03:49:27.316644 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 03:49:27.316655 | orchestrator | Friday 30 January 2026 03:49:18 +0000 (0:00:00.569) 0:06:19.103 ******** 2026-01-30 03:49:27.316666 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.316677 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.316687 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.316698 | orchestrator | 2026-01-30 03:49:27.316709 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 03:49:27.316720 | orchestrator | Friday 30 January 2026 03:49:18 +0000 (0:00:00.341) 0:06:19.444 ******** 2026-01-30 03:49:27.316731 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.316760 | 
orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.316772 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.316783 | orchestrator | 2026-01-30 03:49:27.316794 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 03:49:27.316804 | orchestrator | Friday 30 January 2026 03:49:18 +0000 (0:00:00.305) 0:06:19.749 ******** 2026-01-30 03:49:27.316815 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.316826 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.316837 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.316848 | orchestrator | 2026-01-30 03:49:27.316859 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 03:49:27.316869 | orchestrator | Friday 30 January 2026 03:49:19 +0000 (0:00:00.282) 0:06:20.032 ******** 2026-01-30 03:49:27.316880 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.316891 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.316902 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.316913 | orchestrator | 2026-01-30 03:49:27.316924 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 03:49:27.316934 | orchestrator | Friday 30 January 2026 03:49:19 +0000 (0:00:00.504) 0:06:20.537 ******** 2026-01-30 03:49:27.316945 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.316956 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.316967 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.316977 | orchestrator | 2026-01-30 03:49:27.316988 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 03:49:27.316999 | orchestrator | Friday 30 January 2026 03:49:20 +0000 (0:00:00.360) 0:06:20.897 ******** 2026-01-30 03:49:27.317010 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.317021 | orchestrator | ok: 
[testbed-node-4] 2026-01-30 03:49:27.317032 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.317042 | orchestrator | 2026-01-30 03:49:27.317053 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-30 03:49:27.317064 | orchestrator | Friday 30 January 2026 03:49:20 +0000 (0:00:00.523) 0:06:21.421 ******** 2026-01-30 03:49:27.317074 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.317085 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.317096 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.317106 | orchestrator | 2026-01-30 03:49:27.317117 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-30 03:49:27.317128 | orchestrator | Friday 30 January 2026 03:49:21 +0000 (0:00:00.526) 0:06:21.948 ******** 2026-01-30 03:49:27.317138 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 03:49:27.317150 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 03:49:27.317168 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 03:49:27.317187 | orchestrator | 2026-01-30 03:49:27.317204 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-30 03:49:27.317223 | orchestrator | Friday 30 January 2026 03:49:21 +0000 (0:00:00.649) 0:06:22.597 ******** 2026-01-30 03:49:27.317240 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:49:27.317258 | orchestrator | 2026-01-30 03:49:27.317276 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-30 03:49:27.317293 | orchestrator | Friday 30 January 2026 03:49:22 +0000 (0:00:00.523) 0:06:23.121 ******** 2026-01-30 03:49:27.317310 | orchestrator | skipping: 
[testbed-node-3] 2026-01-30 03:49:27.317328 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.317383 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.317402 | orchestrator | 2026-01-30 03:49:27.317420 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-30 03:49:27.317440 | orchestrator | Friday 30 January 2026 03:49:22 +0000 (0:00:00.275) 0:06:23.397 ******** 2026-01-30 03:49:27.317458 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:49:27.317476 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:49:27.317487 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:49:27.317499 | orchestrator | 2026-01-30 03:49:27.317517 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-30 03:49:27.317535 | orchestrator | Friday 30 January 2026 03:49:23 +0000 (0:00:00.522) 0:06:23.919 ******** 2026-01-30 03:49:27.317553 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.317570 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.317581 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.317592 | orchestrator | 2026-01-30 03:49:27.317603 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-30 03:49:27.317614 | orchestrator | Friday 30 January 2026 03:49:23 +0000 (0:00:00.635) 0:06:24.555 ******** 2026-01-30 03:49:27.317625 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:49:27.317644 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:49:27.317655 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:49:27.317666 | orchestrator | 2026-01-30 03:49:27.317677 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-30 03:49:27.317688 | orchestrator | Friday 30 January 2026 03:49:24 +0000 (0:00:00.321) 0:06:24.876 ******** 2026-01-30 03:49:27.317699 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-30 03:49:27.317710 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-30 03:49:27.317721 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-30 03:49:27.317732 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-30 03:49:27.317745 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-30 03:49:27.317763 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-30 03:49:27.317782 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-30 03:49:27.317799 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-30 03:49:27.317821 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-30 03:50:38.835346 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-30 03:50:38.835427 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-30 03:50:38.835435 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-30 03:50:38.835441 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-30 03:50:38.835463 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-30 03:50:38.835469 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-30 03:50:38.835475 | orchestrator | 2026-01-30 03:50:38.835481 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-01-30 03:50:38.835486 | orchestrator | Friday 30 January 2026 03:49:27 +0000 (0:00:03.248) 0:06:28.125 ******** 2026-01-30 03:50:38.835492 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:50:38.835499 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:50:38.835504 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:50:38.835509 | orchestrator | 2026-01-30 03:50:38.835514 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-30 03:50:38.835520 | orchestrator | Friday 30 January 2026 03:49:27 +0000 (0:00:00.308) 0:06:28.433 ******** 2026-01-30 03:50:38.835525 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:50:38.835531 | orchestrator | 2026-01-30 03:50:38.835536 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-30 03:50:38.835541 | orchestrator | Friday 30 January 2026 03:49:28 +0000 (0:00:00.512) 0:06:28.945 ******** 2026-01-30 03:50:38.835546 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-30 03:50:38.835552 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-30 03:50:38.835557 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-30 03:50:38.835562 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-01-30 03:50:38.835568 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-01-30 03:50:38.835573 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-30 03:50:38.835578 | orchestrator | 2026-01-30 03:50:38.835583 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-30 03:50:38.835588 | orchestrator | Friday 30 January 2026 03:49:29 +0000 (0:00:01.218) 0:06:30.163 ******** 2026-01-30 03:50:38.835593 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:50:38.835598 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-30 03:50:38.835603 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 03:50:38.835609 | orchestrator | 2026-01-30 03:50:38.835614 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-30 03:50:38.835619 | orchestrator | Friday 30 January 2026 03:49:31 +0000 (0:00:02.326) 0:06:32.490 ******** 2026-01-30 03:50:38.835624 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-30 03:50:38.835629 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-30 03:50:38.835635 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:50:38.835680 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-30 03:50:38.835687 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-30 03:50:38.835692 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:50:38.835697 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-30 03:50:38.835702 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-30 03:50:38.835707 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:50:38.835713 | orchestrator | 2026-01-30 03:50:38.835718 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-30 03:50:38.835723 | orchestrator | Friday 30 January 2026 03:49:32 +0000 (0:00:01.116) 0:06:33.606 ******** 2026-01-30 03:50:38.835728 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-30 03:50:38.835734 | orchestrator | 2026-01-30 03:50:38.835739 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-30 03:50:38.835754 | orchestrator | Friday 30 January 2026 03:49:34 +0000 (0:00:02.075) 0:06:35.682 ******** 2026-01-30 03:50:38.835759 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:50:38.835769 | orchestrator | 2026-01-30 03:50:38.835774 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-01-30 03:50:38.835779 | orchestrator | Friday 30 January 2026 03:49:35 +0000 (0:00:00.530) 0:06:36.213 ******** 2026-01-30 03:50:38.835786 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}) 2026-01-30 03:50:38.835792 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}) 2026-01-30 03:50:38.835797 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}) 2026-01-30 03:50:38.835802 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'}) 2026-01-30 03:50:38.835821 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}) 2026-01-30 03:50:38.835830 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}) 2026-01-30 03:50:38.835838 | orchestrator | 2026-01-30 03:50:38.835847 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-30 03:50:38.835855 | orchestrator | Friday 30 January 2026 03:50:21 +0000 (0:00:46.479) 0:07:22.692 ******** 2026-01-30 03:50:38.835863 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:50:38.835870 | orchestrator | skipping: [testbed-node-4] 2026-01-30 
03:50:38.835878 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:50:38.835885 | orchestrator | 2026-01-30 03:50:38.835895 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-30 03:50:38.835903 | orchestrator | Friday 30 January 2026 03:50:22 +0000 (0:00:00.351) 0:07:23.044 ******** 2026-01-30 03:50:38.835912 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:50:38.835921 | orchestrator | 2026-01-30 03:50:38.835929 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-30 03:50:38.835937 | orchestrator | Friday 30 January 2026 03:50:23 +0000 (0:00:00.842) 0:07:23.887 ******** 2026-01-30 03:50:38.835946 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:50:38.835956 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:50:38.835977 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:50:38.835984 | orchestrator | 2026-01-30 03:50:38.835991 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-01-30 03:50:38.835997 | orchestrator | Friday 30 January 2026 03:50:23 +0000 (0:00:00.670) 0:07:24.558 ******** 2026-01-30 03:50:38.836003 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:50:38.836009 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:50:38.836015 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:50:38.836020 | orchestrator | 2026-01-30 03:50:38.836025 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-30 03:50:38.836030 | orchestrator | Friday 30 January 2026 03:50:26 +0000 (0:00:02.579) 0:07:27.137 ******** 2026-01-30 03:50:38.836035 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:50:38.836040 | orchestrator | 2026-01-30 03:50:38.836045 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-01-30 03:50:38.836057 | orchestrator | Friday 30 January 2026 03:50:27 +0000 (0:00:00.728) 0:07:27.866 ******** 2026-01-30 03:50:38.836062 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:50:38.836068 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:50:38.836076 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:50:38.836084 | orchestrator | 2026-01-30 03:50:38.836100 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-30 03:50:38.836107 | orchestrator | Friday 30 January 2026 03:50:28 +0000 (0:00:01.228) 0:07:29.094 ******** 2026-01-30 03:50:38.836115 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:50:38.836123 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:50:38.836130 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:50:38.836138 | orchestrator | 2026-01-30 03:50:38.836146 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-30 03:50:38.836155 | orchestrator | Friday 30 January 2026 03:50:29 +0000 (0:00:01.122) 0:07:30.216 ******** 2026-01-30 03:50:38.836163 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:50:38.836171 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:50:38.836179 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:50:38.836187 | orchestrator | 2026-01-30 03:50:38.836192 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-30 03:50:38.836197 | orchestrator | Friday 30 January 2026 03:50:31 +0000 (0:00:01.742) 0:07:31.959 ******** 2026-01-30 03:50:38.836202 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:50:38.836208 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:50:38.836213 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:50:38.836218 | orchestrator | 2026-01-30 03:50:38.836231 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-01-30 03:50:38.836237 | orchestrator | Friday 30 January 2026 03:50:31 +0000 (0:00:00.528) 0:07:32.488 ******** 2026-01-30 03:50:38.836242 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:50:38.836247 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:50:38.836252 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:50:38.836257 | orchestrator | 2026-01-30 03:50:38.836262 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-30 03:50:38.836273 | orchestrator | Friday 30 January 2026 03:50:31 +0000 (0:00:00.337) 0:07:32.825 ******** 2026-01-30 03:50:38.836278 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-01-30 03:50:38.836283 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-01-30 03:50:38.836288 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-30 03:50:38.836293 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-30 03:50:38.836298 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-01-30 03:50:38.836303 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-30 03:50:38.836308 | orchestrator | 2026-01-30 03:50:38.836314 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-30 03:50:38.836319 | orchestrator | Friday 30 January 2026 03:50:33 +0000 (0:00:01.023) 0:07:33.848 ******** 2026-01-30 03:50:38.836324 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-30 03:50:38.836329 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-30 03:50:38.836334 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-30 03:50:38.836339 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-30 03:50:38.836345 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-30 03:50:38.836350 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-30 03:50:38.836359 | orchestrator | 2026-01-30 03:50:38.836370 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-01-30 03:50:38.836382 | orchestrator | Friday 30 January 2026 03:50:35 +0000 (0:00:02.041) 0:07:35.890 ******** 2026-01-30 03:50:38.836392 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-01-30 03:50:38.836401 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-30 03:50:38.836419 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-30 03:51:07.484058 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-01-30 03:51:07.484152 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-30 03:51:07.484164 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-30 03:51:07.484174 | orchestrator | 2026-01-30 03:51:07.484183 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-30 03:51:07.484193 | orchestrator | Friday 30 January 2026 03:50:38 +0000 (0:00:03.757) 0:07:39.648 ******** 2026-01-30 03:51:07.484224 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:51:07.484233 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:51:07.484241 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-30 03:51:07.484250 | orchestrator | 2026-01-30 03:51:07.484258 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-30 03:51:07.484266 | orchestrator | Friday 30 January 2026 03:50:41 +0000 (0:00:02.313) 0:07:41.961 ******** 2026-01-30 03:51:07.484274 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:51:07.484282 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:51:07.484290 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-01-30 03:51:07.484300 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-30 03:51:07.484308 | orchestrator | 2026-01-30 03:51:07.484316 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-30 03:51:07.484324 | orchestrator | Friday 30 January 2026 03:50:53 +0000 (0:00:12.483) 0:07:54.445 ******** 2026-01-30 03:51:07.484332 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:51:07.484340 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:51:07.484348 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:51:07.484356 | orchestrator | 2026-01-30 03:51:07.484365 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-30 03:51:07.484373 | orchestrator | Friday 30 January 2026 03:50:54 +0000 (0:00:00.882) 0:07:55.328 ******** 2026-01-30 03:51:07.484381 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:51:07.484389 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:51:07.484397 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:51:07.484419 | orchestrator | 2026-01-30 03:51:07.484427 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-30 03:51:07.484435 | orchestrator | Friday 30 January 2026 03:50:54 +0000 (0:00:00.296) 0:07:55.624 ******** 2026-01-30 03:51:07.484443 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:51:07.484452 | orchestrator | 2026-01-30 03:51:07.484460 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-30 03:51:07.484468 | orchestrator | Friday 30 January 2026 03:50:55 +0000 (0:00:00.609) 0:07:56.234 ******** 2026-01-30 03:51:07.484476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:51:07.484484 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-01-30 03:51:07.484492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 03:51:07.484500 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484508 | orchestrator |
2026-01-30 03:51:07.484516 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-30 03:51:07.484524 | orchestrator | Friday 30 January 2026 03:50:55 +0000 (0:00:00.370) 0:07:56.604 ********
2026-01-30 03:51:07.484532 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484539 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:07.484547 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:07.484569 | orchestrator |
2026-01-30 03:51:07.484583 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-30 03:51:07.484596 | orchestrator | Friday 30 January 2026 03:50:56 +0000 (0:00:00.267) 0:07:56.872 ********
2026-01-30 03:51:07.484624 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484639 | orchestrator |
2026-01-30 03:51:07.484652 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-30 03:51:07.484665 | orchestrator | Friday 30 January 2026 03:50:56 +0000 (0:00:00.203) 0:07:57.075 ********
2026-01-30 03:51:07.484679 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484692 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:07.484704 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:07.484718 | orchestrator |
2026-01-30 03:51:07.484732 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-30 03:51:07.484796 | orchestrator | Friday 30 January 2026 03:50:56 +0000 (0:00:00.272) 0:07:57.347 ********
2026-01-30 03:51:07.484811 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484825 | orchestrator |
2026-01-30 03:51:07.484838 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-30 03:51:07.484849 | orchestrator | Friday 30 January 2026 03:50:56 +0000 (0:00:00.200) 0:07:57.547 ********
2026-01-30 03:51:07.484858 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484866 | orchestrator |
2026-01-30 03:51:07.484875 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-30 03:51:07.484884 | orchestrator | Friday 30 January 2026 03:50:57 +0000 (0:00:00.544) 0:07:58.092 ********
2026-01-30 03:51:07.484893 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484901 | orchestrator |
2026-01-30 03:51:07.484910 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-30 03:51:07.484919 | orchestrator | Friday 30 January 2026 03:50:57 +0000 (0:00:00.120) 0:07:58.213 ********
2026-01-30 03:51:07.484939 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484948 | orchestrator |
2026-01-30 03:51:07.484956 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-30 03:51:07.484965 | orchestrator | Friday 30 January 2026 03:50:57 +0000 (0:00:00.202) 0:07:58.415 ********
2026-01-30 03:51:07.484974 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.484983 | orchestrator |
2026-01-30 03:51:07.484992 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-30 03:51:07.485001 | orchestrator | Friday 30 January 2026 03:50:57 +0000 (0:00:00.219) 0:07:58.635 ********
2026-01-30 03:51:07.485010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 03:51:07.485037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 03:51:07.485046 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 03:51:07.485055 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.485064 | orchestrator |
2026-01-30 03:51:07.485072 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-30 03:51:07.485081 | orchestrator | Friday 30 January 2026 03:50:58 +0000 (0:00:00.378) 0:07:59.013 ********
2026-01-30 03:51:07.485090 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.485099 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:07.485107 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:07.485129 | orchestrator |
2026-01-30 03:51:07.485138 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-30 03:51:07.485147 | orchestrator | Friday 30 January 2026 03:50:58 +0000 (0:00:00.265) 0:07:59.279 ********
2026-01-30 03:51:07.485156 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.485164 | orchestrator |
2026-01-30 03:51:07.485173 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-30 03:51:07.485182 | orchestrator | Friday 30 January 2026 03:50:58 +0000 (0:00:00.204) 0:07:59.483 ********
2026-01-30 03:51:07.485190 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.485199 | orchestrator |
2026-01-30 03:51:07.485207 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-30 03:51:07.485216 | orchestrator |
2026-01-30 03:51:07.485225 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 03:51:07.485234 | orchestrator | Friday 30 January 2026 03:50:59 +0000 (0:00:00.783) 0:08:00.267 ********
2026-01-30 03:51:07.485243 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:51:07.485254 | orchestrator |
2026-01-30 03:51:07.485262 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 03:51:07.485271 | orchestrator | Friday 30 January 2026 03:51:00 +0000 (0:00:01.125) 0:08:01.392 ********
2026-01-30 03:51:07.485280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:51:07.485297 | orchestrator |
2026-01-30 03:51:07.485306 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 03:51:07.485315 | orchestrator | Friday 30 January 2026 03:51:01 +0000 (0:00:01.172) 0:08:02.565 ********
2026-01-30 03:51:07.485324 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.485332 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:07.485341 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:07.485350 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:07.485359 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:07.485367 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:07.485376 | orchestrator |
2026-01-30 03:51:07.485385 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 03:51:07.485395 | orchestrator | Friday 30 January 2026 03:51:02 +0000 (0:00:01.022) 0:08:03.587 ********
2026-01-30 03:51:07.485411 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:07.485432 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:07.485450 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:07.485464 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:07.485477 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:07.485491 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:07.485503 | orchestrator |
2026-01-30 03:51:07.485517 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 03:51:07.485531 | orchestrator | Friday 30 January 2026 03:51:03 +0000 (0:00:00.963) 0:08:04.550 ********
2026-01-30 03:51:07.485544 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:07.485557 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:07.485571 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:07.485584 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:07.485598 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:07.485613 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:07.485627 | orchestrator |
2026-01-30 03:51:07.485641 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 03:51:07.485655 | orchestrator | Friday 30 January 2026 03:51:04 +0000 (0:00:00.700) 0:08:05.250 ********
2026-01-30 03:51:07.485670 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:07.485684 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:07.485700 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:07.485725 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:07.485741 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:07.485778 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:07.485793 | orchestrator |
2026-01-30 03:51:07.485808 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 03:51:07.485821 | orchestrator | Friday 30 January 2026 03:51:05 +0000 (0:00:00.944) 0:08:06.195 ********
2026-01-30 03:51:07.485836 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.485849 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:07.485865 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:07.485879 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:07.485893 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:07.485907 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:07.485921 | orchestrator |
2026-01-30 03:51:07.485934 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 03:51:07.485949 | orchestrator | Friday 30 January 2026 03:51:06 +0000 (0:00:00.973) 0:08:07.169 ********
2026-01-30 03:51:07.485962 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.485976 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:07.485990 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:07.486005 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:07.486094 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:07.486113 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:07.486128 | orchestrator |
2026-01-30 03:51:07.486142 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 03:51:07.486171 | orchestrator | Friday 30 January 2026 03:51:07 +0000 (0:00:00.815) 0:08:07.984 ********
2026-01-30 03:51:07.486186 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:07.486217 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:38.024541 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:38.024651 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:38.024666 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:38.024676 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:38.024686 | orchestrator |
2026-01-30 03:51:38.024696 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 03:51:38.024707 | orchestrator | Friday 30 January 2026 03:51:07 +0000 (0:00:00.648) 0:08:08.633 ********
2026-01-30 03:51:38.024716 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.024726 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.024735 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.024744 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.024752 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:38.024761 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:38.024770 | orchestrator |
2026-01-30 03:51:38.024778 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 03:51:38.024786 | orchestrator | Friday 30 January 2026 03:51:09 +0000 (0:00:01.404) 0:08:10.037 ********
2026-01-30 03:51:38.024795 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.024803 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.024811 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.024819 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.024828 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:38.024836 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:38.024846 | orchestrator |
2026-01-30 03:51:38.024854 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 03:51:38.024912 | orchestrator | Friday 30 January 2026 03:51:10 +0000 (0:00:01.051) 0:08:11.089 ********
2026-01-30 03:51:38.024925 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:38.024935 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:38.024943 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:38.024952 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:38.024961 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:38.024970 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:38.024979 | orchestrator |
2026-01-30 03:51:38.024988 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 03:51:38.024997 | orchestrator | Friday 30 January 2026 03:51:11 +0000 (0:00:00.795) 0:08:11.885 ********
2026-01-30 03:51:38.025005 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:38.025014 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:38.025022 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:38.025031 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.025040 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:38.025049 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:38.025058 | orchestrator |
2026-01-30 03:51:38.025067 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 03:51:38.025075 | orchestrator | Friday 30 January 2026 03:51:11 +0000 (0:00:00.568) 0:08:12.453 ********
2026-01-30 03:51:38.025084 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.025091 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.025097 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.025103 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:38.025109 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:38.025116 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:38.025122 | orchestrator |
2026-01-30 03:51:38.025128 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 03:51:38.025134 | orchestrator | Friday 30 January 2026 03:51:12 +0000 (0:00:00.792) 0:08:13.246 ********
2026-01-30 03:51:38.025140 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.025146 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.025153 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.025177 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:38.025183 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:38.025190 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:38.025195 | orchestrator |
2026-01-30 03:51:38.025201 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 03:51:38.025208 | orchestrator | Friday 30 January 2026 03:51:12 +0000 (0:00:00.544) 0:08:13.790 ********
2026-01-30 03:51:38.025213 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.025219 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.025226 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.025232 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:38.025238 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:38.025244 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:38.025250 | orchestrator |
2026-01-30 03:51:38.025255 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 03:51:38.025262 | orchestrator | Friday 30 January 2026 03:51:13 +0000 (0:00:00.814) 0:08:14.605 ********
2026-01-30 03:51:38.025268 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:38.025274 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:38.025280 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:38.025286 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:38.025292 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:38.025298 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:38.025304 | orchestrator |
2026-01-30 03:51:38.025310 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 03:51:38.025316 | orchestrator | Friday 30 January 2026 03:51:14 +0000 (0:00:00.877) 0:08:15.483 ********
2026-01-30 03:51:38.025322 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:38.025328 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:38.025334 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:38.025340 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:51:38.025346 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:51:38.025352 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:51:38.025358 | orchestrator |
2026-01-30 03:51:38.025367 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 03:51:38.025375 | orchestrator | Friday 30 January 2026 03:51:15 +0000 (0:00:00.675) 0:08:16.158 ********
2026-01-30 03:51:38.025387 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:51:38.025398 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:51:38.025406 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:51:38.025414 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.025422 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:38.025430 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:38.025439 | orchestrator |
2026-01-30 03:51:38.025447 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 03:51:38.025455 | orchestrator | Friday 30 January 2026 03:51:16 +0000 (0:00:00.828) 0:08:16.987 ********
2026-01-30 03:51:38.025463 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.025488 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.025498 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.025507 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.025514 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:38.025522 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:38.025531 | orchestrator |
2026-01-30 03:51:38.025539 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 03:51:38.025548 | orchestrator | Friday 30 January 2026 03:51:16 +0000 (0:00:00.618) 0:08:17.605 ********
2026-01-30 03:51:38.025594 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.025600 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.025605 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.025611 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.025616 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:38.025625 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:38.025633 | orchestrator |
2026-01-30 03:51:38.025642 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-01-30 03:51:38.025659 | orchestrator | Friday 30 January 2026 03:51:18 +0000 (0:00:01.232) 0:08:18.838 ********
2026-01-30 03:51:38.025667 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 03:51:38.025675 | orchestrator |
2026-01-30 03:51:38.025684 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-30 03:51:38.025693 | orchestrator | Friday 30 January 2026 03:51:22 +0000 (0:00:04.132) 0:08:22.971 ********
2026-01-30 03:51:38.025702 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 03:51:38.025711 | orchestrator |
2026-01-30 03:51:38.025719 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-30 03:51:38.025728 | orchestrator | Friday 30 January 2026 03:51:24 +0000 (0:00:02.113) 0:08:25.084 ********
2026-01-30 03:51:38.025734 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:51:38.025739 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:51:38.025745 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:51:38.025750 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.025755 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:51:38.025760 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:51:38.025766 | orchestrator |
2026-01-30 03:51:38.025771 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-30 03:51:38.025776 | orchestrator | Friday 30 January 2026 03:51:25 +0000 (0:00:01.700) 0:08:26.785 ********
2026-01-30 03:51:38.025782 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:51:38.025787 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:51:38.025795 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:51:38.025806 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:51:38.025817 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:51:38.025825 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:51:38.025834 | orchestrator |
2026-01-30 03:51:38.025842 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-30 03:51:38.025849 | orchestrator | Friday 30 January 2026 03:51:26 +0000 (0:00:00.971) 0:08:27.757 ********
2026-01-30 03:51:38.025859 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:51:38.025915 | orchestrator |
2026-01-30 03:51:38.025924 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-30 03:51:38.025933 | orchestrator | Friday 30 January 2026 03:51:28 +0000 (0:00:01.394) 0:08:29.152 ********
2026-01-30 03:51:38.025941 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:51:38.025950 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:51:38.025958 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:51:38.025966 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:51:38.025974 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:51:38.025982 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:51:38.025990 | orchestrator |
2026-01-30 03:51:38.025998 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-30 03:51:38.026006 | orchestrator | Friday 30 January 2026 03:51:30 +0000 (0:00:01.767) 0:08:30.919 ********
2026-01-30 03:51:38.026090 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:51:38.026103 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:51:38.026112 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:51:38.026121 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:51:38.026129 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:51:38.026138 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:51:38.026147 | orchestrator |
2026-01-30 03:51:38.026152 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-01-30 03:51:38.026164 | orchestrator | Friday 30 January 2026 03:51:33 +0000 (0:00:03.264) 0:08:34.183 ********
2026-01-30 03:51:38.026169 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 03:51:38.026183 | orchestrator |
2026-01-30 03:51:38.026188 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-01-30 03:51:38.026193 | orchestrator | Friday 30 January 2026 03:51:34 +0000 (0:00:01.281) 0:08:35.465 ********
2026-01-30 03:51:38.026215 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:51:38.026221 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:51:38.026226 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:51:38.026231 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:51:38.026237 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:51:38.026241 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:51:38.026247 | orchestrator |
2026-01-30 03:51:38.026252 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-01-30 03:51:38.026257 | orchestrator | Friday 30 January 2026 03:51:35 +0000 (0:00:00.814) 0:08:36.280 ********
2026-01-30 03:51:38.026262 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:51:38.026267 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:51:38.026272 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:51:38.026277 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:51:38.026283 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:51:38.026288 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:51:38.026293 | orchestrator |
2026-01-30 03:51:38.026298 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-01-30 03:51:38.026303 | orchestrator | Friday 30 January 2026 03:51:37 +0000 (0:00:02.106) 0:08:38.387 ********
2026-01-30 03:51:38.026318 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.178925 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.179102 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.179129 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:52:05.179148 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:52:05.179167 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:52:05.179188 | orchestrator |
2026-01-30 03:52:05.179209 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-01-30 03:52:05.179230 | orchestrator |
2026-01-30 03:52:05.179250 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 03:52:05.179268 | orchestrator | Friday 30 January 2026 03:51:38 +0000 (0:00:01.134) 0:08:39.521 ********
2026-01-30 03:52:05.179288 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:52:05.179310 | orchestrator |
2026-01-30 03:52:05.179328 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 03:52:05.179348 | orchestrator | Friday 30 January 2026 03:51:39 +0000 (0:00:00.726) 0:08:40.248 ********
2026-01-30 03:52:05.179370 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:52:05.179391 | orchestrator |
2026-01-30 03:52:05.179413 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 03:52:05.179427 | orchestrator | Friday 30 January 2026 03:51:39 +0000 (0:00:00.517) 0:08:40.765 ********
2026-01-30 03:52:05.179441 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.179456 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.179470 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.179482 | orchestrator |
2026-01-30 03:52:05.179494 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 03:52:05.179507 | orchestrator | Friday 30 January 2026 03:51:40 +0000 (0:00:00.295) 0:08:41.061 ********
2026-01-30 03:52:05.179520 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.179532 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.179543 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.179559 | orchestrator |
2026-01-30 03:52:05.179575 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 03:52:05.179592 | orchestrator | Friday 30 January 2026 03:51:41 +0000 (0:00:00.952) 0:08:42.013 ********
2026-01-30 03:52:05.179609 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.179625 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.179677 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.179695 | orchestrator |
2026-01-30 03:52:05.179711 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 03:52:05.179728 | orchestrator | Friday 30 January 2026 03:51:41 +0000 (0:00:00.704) 0:08:42.718 ********
2026-01-30 03:52:05.179745 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.179760 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.179775 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.179789 | orchestrator |
2026-01-30 03:52:05.179805 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 03:52:05.179820 | orchestrator | Friday 30 January 2026 03:51:42 +0000 (0:00:00.681) 0:08:43.400 ********
2026-01-30 03:52:05.179835 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.179849 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.179865 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.179880 | orchestrator |
2026-01-30 03:52:05.179895 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 03:52:05.179909 | orchestrator | Friday 30 January 2026 03:51:42 +0000 (0:00:00.289) 0:08:43.690 ********
2026-01-30 03:52:05.179924 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.179939 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.179954 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.180029 | orchestrator |
2026-01-30 03:52:05.180045 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 03:52:05.180060 | orchestrator | Friday 30 January 2026 03:51:43 +0000 (0:00:00.526) 0:08:44.216 ********
2026-01-30 03:52:05.180076 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.180092 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.180108 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.180123 | orchestrator |
2026-01-30 03:52:05.180140 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 03:52:05.180156 | orchestrator | Friday 30 January 2026 03:51:43 +0000 (0:00:00.287) 0:08:44.504 ********
2026-01-30 03:52:05.180172 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.180188 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.180204 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.180221 | orchestrator |
2026-01-30 03:52:05.180257 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 03:52:05.180275 | orchestrator | Friday 30 January 2026 03:51:44 +0000 (0:00:00.729) 0:08:45.233 ********
2026-01-30 03:52:05.180291 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.180307 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.180323 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.180339 | orchestrator |
2026-01-30 03:52:05.180355 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 03:52:05.180372 | orchestrator | Friday 30 January 2026 03:51:45 +0000 (0:00:00.937) 0:08:46.171 ********
2026-01-30 03:52:05.180389 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.180406 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.180422 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.180436 | orchestrator |
2026-01-30 03:52:05.180446 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 03:52:05.180455 | orchestrator | Friday 30 January 2026 03:51:45 +0000 (0:00:00.303) 0:08:46.474 ********
2026-01-30 03:52:05.180465 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.180474 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.180484 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.180494 | orchestrator |
2026-01-30 03:52:05.180503 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 03:52:05.180513 | orchestrator | Friday 30 January 2026 03:51:45 +0000 (0:00:00.329) 0:08:46.804 ********
2026-01-30 03:52:05.180523 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.180532 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.180542 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.180551 | orchestrator |
2026-01-30 03:52:05.180584 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 03:52:05.180607 | orchestrator | Friday 30 January 2026 03:51:46 +0000 (0:00:00.323) 0:08:47.128 ********
2026-01-30 03:52:05.180617 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.180627 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.180637 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.180646 | orchestrator |
2026-01-30 03:52:05.180656 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 03:52:05.180666 | orchestrator | Friday 30 January 2026 03:51:46 +0000 (0:00:00.537) 0:08:47.665 ********
2026-01-30 03:52:05.180676 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.180685 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.180695 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.180705 | orchestrator |
2026-01-30 03:52:05.180714 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 03:52:05.180724 | orchestrator | Friday 30 January 2026 03:51:47 +0000 (0:00:00.335) 0:08:48.001 ********
2026-01-30 03:52:05.180734 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.180743 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.180753 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.180763 | orchestrator |
2026-01-30 03:52:05.180773 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 03:52:05.180782 | orchestrator | Friday 30 January 2026 03:51:47 +0000 (0:00:00.292) 0:08:48.293 ********
2026-01-30 03:52:05.180792 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.180802 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.180811 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.180821 | orchestrator |
2026-01-30 03:52:05.180830 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 03:52:05.180840 | orchestrator | Friday 30 January 2026 03:51:47 +0000 (0:00:00.316) 0:08:48.610 ********
2026-01-30 03:52:05.180850 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.180860 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.180871 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.180887 | orchestrator |
2026-01-30 03:52:05.180903 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 03:52:05.180918 | orchestrator | Friday 30 January 2026 03:51:48 +0000 (0:00:00.534) 0:08:49.145 ********
2026-01-30 03:52:05.180935 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.180953 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.180994 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.181011 | orchestrator |
2026-01-30 03:52:05.181021 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 03:52:05.181031 | orchestrator | Friday 30 January 2026 03:51:48 +0000 (0:00:00.336) 0:08:49.481 ********
2026-01-30 03:52:05.181040 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:52:05.181050 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:52:05.181060 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:52:05.181070 | orchestrator |
2026-01-30 03:52:05.181079 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-30 03:52:05.181089 | orchestrator | Friday 30 January 2026 03:51:49 +0000 (0:00:00.535) 0:08:50.017 ********
2026-01-30 03:52:05.181099 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:52:05.181109 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:52:05.181119 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-01-30 03:52:05.181129 | orchestrator |
2026-01-30 03:52:05.181141 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-01-30 03:52:05.181158 | orchestrator | Friday 30 January 2026 03:51:49 +0000 (0:00:00.611) 0:08:50.629 ********
2026-01-30 03:52:05.181173 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 03:52:05.181188 | orchestrator |
2026-01-30 03:52:05.181203 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-01-30 03:52:05.181218 | orchestrator | Friday 30 January 2026 03:51:52 +0000 (0:00:02.255) 0:08:52.884 ********
2026-01-30 03:52:05.181282 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-01-30 03:52:05.181302 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:52:05.181312 | orchestrator |
2026-01-30 03:52:05.181322 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-01-30 03:52:05.181340 | orchestrator | Friday 30 January 2026 03:51:52 +0000 (0:00:00.215) 0:08:53.100 ********
2026-01-30 03:52:05.181352 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-30 03:52:05.181371 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-01-30 03:52:05.181381 | orchestrator |
2026-01-30 03:52:05.181391 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-01-30 03:52:05.181401 | orchestrator | Friday 30 January 2026 03:52:00 +0000 (0:00:08.478) 0:09:01.578 ********
2026-01-30 03:52:05.181410 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 03:52:05.181420 | orchestrator |
2026-01-30 03:52:05.181430 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-30 03:52:05.181439 | orchestrator | Friday 30 January 2026 03:52:04 +0000 (0:00:03.688) 0:09:05.266 ********
2026-01-30 03:52:05.181449 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:52:05.181459 | orchestrator |
2026-01-30 03:52:05.181477 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-30 03:52:31.459021 | orchestrator | Friday 30 January 2026 03:52:05 +0000 (0:00:00.724) 0:09:05.991 ********
2026-01-30 03:52:31.459241 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-30 03:52:31.459261 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-30 03:52:31.459273 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-30 03:52:31.459285 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-01-30 03:52:31.459297 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-30 03:52:31.459308 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-30 03:52:31.459320 | orchestrator |
2026-01-30 03:52:31.459332 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-30 03:52:31.459343 | orchestrator | Friday 30 January 2026 03:52:06 +0000 (0:00:01.053) 0:09:07.045 ********
2026-01-30 03:52:31.459354 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 03:52:31.459366 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-30 03:52:31.459377 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-30 03:52:31.459389 | orchestrator |
2026-01-30 03:52:31.459400 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-30 03:52:31.459410 | orchestrator | Friday 30 January 2026 03:52:08 +0000 (0:00:02.163) 0:09:09.209 ********
2026-01-30 03:52:31.459422 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-30 03:52:31.459434 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-01-30 03:52:31.459445 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.459456 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-30 03:52:31.459467 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-30 03:52:31.459478 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.459515 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-30 03:52:31.459526 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-30 03:52:31.459537 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.459550 | orchestrator | 2026-01-30 03:52:31.459563 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-30 03:52:31.459575 | orchestrator | Friday 30 January 2026 03:52:09 +0000 (0:00:01.201) 0:09:10.411 ******** 2026-01-30 03:52:31.459587 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.459599 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.459611 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.459624 | orchestrator | 2026-01-30 03:52:31.459637 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-30 03:52:31.459649 | orchestrator | Friday 30 January 2026 03:52:12 +0000 (0:00:02.536) 0:09:12.947 ******** 2026-01-30 03:52:31.459661 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:31.459673 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:31.459685 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:31.459698 | orchestrator | 2026-01-30 03:52:31.459710 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-30 03:52:31.459721 | orchestrator | Friday 30 January 2026 03:52:12 +0000 (0:00:00.539) 0:09:13.486 ******** 2026-01-30 03:52:31.459732 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-30 03:52:31.459743 | orchestrator | 2026-01-30 03:52:31.459754 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-30 03:52:31.459765 | orchestrator | Friday 30 January 2026 03:52:13 +0000 (0:00:00.539) 0:09:14.026 ******** 2026-01-30 03:52:31.459775 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:52:31.459787 | orchestrator | 2026-01-30 03:52:31.459798 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-30 03:52:31.459809 | orchestrator | Friday 30 January 2026 03:52:13 +0000 (0:00:00.714) 0:09:14.741 ******** 2026-01-30 03:52:31.459820 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.459831 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.459841 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.459852 | orchestrator | 2026-01-30 03:52:31.459876 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-30 03:52:31.459887 | orchestrator | Friday 30 January 2026 03:52:15 +0000 (0:00:01.221) 0:09:15.962 ******** 2026-01-30 03:52:31.459898 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.459909 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.459920 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.459931 | orchestrator | 2026-01-30 03:52:31.459941 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-30 03:52:31.459953 | orchestrator | Friday 30 January 2026 03:52:16 +0000 (0:00:01.134) 0:09:17.096 ******** 2026-01-30 03:52:31.459963 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.459974 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.459985 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.459996 | orchestrator | 2026-01-30 
03:52:31.460006 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-30 03:52:31.460017 | orchestrator | Friday 30 January 2026 03:52:18 +0000 (0:00:01.978) 0:09:19.074 ******** 2026-01-30 03:52:31.460028 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.460039 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.460077 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.460090 | orchestrator | 2026-01-30 03:52:31.460101 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-30 03:52:31.460112 | orchestrator | Friday 30 January 2026 03:52:21 +0000 (0:00:02.895) 0:09:21.970 ******** 2026-01-30 03:52:31.460123 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:31.460134 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:31.460153 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:31.460164 | orchestrator | 2026-01-30 03:52:31.460175 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-30 03:52:31.460205 | orchestrator | Friday 30 January 2026 03:52:22 +0000 (0:00:01.409) 0:09:23.380 ******** 2026-01-30 03:52:31.460217 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.460227 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.460238 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.460249 | orchestrator | 2026-01-30 03:52:31.460260 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-30 03:52:31.460270 | orchestrator | Friday 30 January 2026 03:52:23 +0000 (0:00:00.673) 0:09:24.053 ******** 2026-01-30 03:52:31.460281 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:52:31.460292 | orchestrator | 2026-01-30 03:52:31.460302 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-30 03:52:31.460313 | orchestrator | Friday 30 January 2026 03:52:23 +0000 (0:00:00.518) 0:09:24.572 ******** 2026-01-30 03:52:31.460324 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:31.460335 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:31.460351 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:31.460370 | orchestrator | 2026-01-30 03:52:31.460397 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-30 03:52:31.460419 | orchestrator | Friday 30 January 2026 03:52:24 +0000 (0:00:00.500) 0:09:25.073 ******** 2026-01-30 03:52:31.460437 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:31.460456 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:31.460475 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:31.460493 | orchestrator | 2026-01-30 03:52:31.460508 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-30 03:52:31.460519 | orchestrator | Friday 30 January 2026 03:52:25 +0000 (0:00:01.158) 0:09:26.232 ******** 2026-01-30 03:52:31.460530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:52:31.460541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:52:31.460552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:52:31.460563 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:31.460574 | orchestrator | 2026-01-30 03:52:31.460585 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-30 03:52:31.460596 | orchestrator | Friday 30 January 2026 03:52:26 +0000 (0:00:00.643) 0:09:26.876 ******** 2026-01-30 03:52:31.460606 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:31.460617 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:31.460628 | orchestrator | ok: [testbed-node-5] 2026-01-30 
03:52:31.460638 | orchestrator | 2026-01-30 03:52:31.460649 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-30 03:52:31.460660 | orchestrator | 2026-01-30 03:52:31.460671 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 03:52:31.460682 | orchestrator | Friday 30 January 2026 03:52:26 +0000 (0:00:00.524) 0:09:27.400 ******** 2026-01-30 03:52:31.460693 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:52:31.460706 | orchestrator | 2026-01-30 03:52:31.460716 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 03:52:31.460727 | orchestrator | Friday 30 January 2026 03:52:27 +0000 (0:00:00.701) 0:09:28.101 ******** 2026-01-30 03:52:31.460738 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:52:31.460749 | orchestrator | 2026-01-30 03:52:31.460759 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 03:52:31.460770 | orchestrator | Friday 30 January 2026 03:52:27 +0000 (0:00:00.493) 0:09:28.595 ******** 2026-01-30 03:52:31.460781 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:31.460802 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:31.460813 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:31.460824 | orchestrator | 2026-01-30 03:52:31.460835 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 03:52:31.460846 | orchestrator | Friday 30 January 2026 03:52:28 +0000 (0:00:00.531) 0:09:29.127 ******** 2026-01-30 03:52:31.460856 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:31.460867 | orchestrator | ok: [testbed-node-4] 2026-01-30 
03:52:31.460878 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:31.460889 | orchestrator | 2026-01-30 03:52:31.460900 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 03:52:31.460918 | orchestrator | Friday 30 January 2026 03:52:28 +0000 (0:00:00.691) 0:09:29.818 ******** 2026-01-30 03:52:31.460929 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:31.460940 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:31.460950 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:31.460961 | orchestrator | 2026-01-30 03:52:31.460972 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 03:52:31.460983 | orchestrator | Friday 30 January 2026 03:52:29 +0000 (0:00:00.698) 0:09:30.517 ******** 2026-01-30 03:52:31.460993 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:31.461004 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:31.461015 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:31.461025 | orchestrator | 2026-01-30 03:52:31.461036 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 03:52:31.461071 | orchestrator | Friday 30 January 2026 03:52:30 +0000 (0:00:00.704) 0:09:31.221 ******** 2026-01-30 03:52:31.461084 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:31.461095 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:31.461106 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:31.461116 | orchestrator | 2026-01-30 03:52:31.461127 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 03:52:31.461138 | orchestrator | Friday 30 January 2026 03:52:30 +0000 (0:00:00.555) 0:09:31.777 ******** 2026-01-30 03:52:31.461149 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:31.461160 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:31.461171 | orchestrator | skipping: 
[testbed-node-5] 2026-01-30 03:52:31.461182 | orchestrator | 2026-01-30 03:52:31.461193 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 03:52:31.461204 | orchestrator | Friday 30 January 2026 03:52:31 +0000 (0:00:00.317) 0:09:32.094 ******** 2026-01-30 03:52:31.461225 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:51.979083 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:51.979175 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:51.979183 | orchestrator | 2026-01-30 03:52:51.979191 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 03:52:51.979199 | orchestrator | Friday 30 January 2026 03:52:31 +0000 (0:00:00.309) 0:09:32.404 ******** 2026-01-30 03:52:51.979206 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:51.979214 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:51.979221 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:51.979227 | orchestrator | 2026-01-30 03:52:51.979234 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 03:52:51.979241 | orchestrator | Friday 30 January 2026 03:52:32 +0000 (0:00:00.931) 0:09:33.336 ******** 2026-01-30 03:52:51.979249 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:51.979253 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:51.979257 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:51.979261 | orchestrator | 2026-01-30 03:52:51.979265 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 03:52:51.979269 | orchestrator | Friday 30 January 2026 03:52:33 +0000 (0:00:00.718) 0:09:34.054 ******** 2026-01-30 03:52:51.979273 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:51.979278 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:51.979281 | orchestrator | skipping: [testbed-node-5] 2026-01-30 
03:52:51.979303 | orchestrator | 2026-01-30 03:52:51.979307 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 03:52:51.979311 | orchestrator | Friday 30 January 2026 03:52:33 +0000 (0:00:00.308) 0:09:34.362 ******** 2026-01-30 03:52:51.979315 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:51.979320 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:51.979324 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:51.979328 | orchestrator | 2026-01-30 03:52:51.979332 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 03:52:51.979336 | orchestrator | Friday 30 January 2026 03:52:33 +0000 (0:00:00.300) 0:09:34.663 ******** 2026-01-30 03:52:51.979340 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:51.979344 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:51.979348 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:51.979351 | orchestrator | 2026-01-30 03:52:51.979355 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 03:52:51.979359 | orchestrator | Friday 30 January 2026 03:52:34 +0000 (0:00:00.539) 0:09:35.203 ******** 2026-01-30 03:52:51.979363 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:51.979367 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:51.979370 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:51.979374 | orchestrator | 2026-01-30 03:52:51.979378 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 03:52:51.979381 | orchestrator | Friday 30 January 2026 03:52:34 +0000 (0:00:00.339) 0:09:35.542 ******** 2026-01-30 03:52:51.979385 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:51.979389 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:51.979393 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:51.979397 | orchestrator | 2026-01-30 
03:52:51.979400 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 03:52:51.979404 | orchestrator | Friday 30 January 2026 03:52:35 +0000 (0:00:00.324) 0:09:35.867 ******** 2026-01-30 03:52:51.979408 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:51.979412 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:51.979416 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:51.979419 | orchestrator | 2026-01-30 03:52:51.979423 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 03:52:51.979427 | orchestrator | Friday 30 January 2026 03:52:35 +0000 (0:00:00.322) 0:09:36.190 ******** 2026-01-30 03:52:51.979431 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:51.979435 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:51.979438 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:51.979442 | orchestrator | 2026-01-30 03:52:51.979446 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 03:52:51.979450 | orchestrator | Friday 30 January 2026 03:52:35 +0000 (0:00:00.538) 0:09:36.729 ******** 2026-01-30 03:52:51.979454 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:51.979457 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:51.979461 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:51.979465 | orchestrator | 2026-01-30 03:52:51.979469 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 03:52:51.979473 | orchestrator | Friday 30 January 2026 03:52:36 +0000 (0:00:00.324) 0:09:37.053 ******** 2026-01-30 03:52:51.979487 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:51.979491 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:51.979495 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:51.979498 | orchestrator | 2026-01-30 03:52:51.979502 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 03:52:51.979506 | orchestrator | Friday 30 January 2026 03:52:36 +0000 (0:00:00.335) 0:09:37.388 ******** 2026-01-30 03:52:51.979510 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:52:51.979513 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:52:51.979517 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:52:51.979521 | orchestrator | 2026-01-30 03:52:51.979525 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-30 03:52:51.979533 | orchestrator | Friday 30 January 2026 03:52:37 +0000 (0:00:00.728) 0:09:38.117 ******** 2026-01-30 03:52:51.979538 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:52:51.979542 | orchestrator | 2026-01-30 03:52:51.979546 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-30 03:52:51.979550 | orchestrator | Friday 30 January 2026 03:52:37 +0000 (0:00:00.555) 0:09:38.673 ******** 2026-01-30 03:52:51.979554 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:52:51.979558 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-30 03:52:51.979562 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 03:52:51.979566 | orchestrator | 2026-01-30 03:52:51.979570 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-30 03:52:51.979574 | orchestrator | Friday 30 January 2026 03:52:40 +0000 (0:00:02.350) 0:09:41.024 ******** 2026-01-30 03:52:51.979587 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-30 03:52:51.979592 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-30 03:52:51.979596 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:51.979600 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-30 03:52:51.979604 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-30 03:52:51.979607 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:51.979611 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-30 03:52:51.979615 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-30 03:52:51.979619 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:51.979623 | orchestrator | 2026-01-30 03:52:51.979626 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-30 03:52:51.979630 | orchestrator | Friday 30 January 2026 03:52:41 +0000 (0:00:01.180) 0:09:42.204 ******** 2026-01-30 03:52:51.979634 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:52:51.979638 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:52:51.979642 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:52:51.979645 | orchestrator | 2026-01-30 03:52:51.979650 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-30 03:52:51.979654 | orchestrator | Friday 30 January 2026 03:52:41 +0000 (0:00:00.427) 0:09:42.632 ******** 2026-01-30 03:52:51.979658 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:52:51.979663 | orchestrator | 2026-01-30 03:52:51.979667 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-30 03:52:51.979672 | orchestrator | Friday 30 January 2026 03:52:42 +0000 (0:00:00.483) 0:09:43.115 ******** 2026-01-30 03:52:51.979677 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 03:52:51.979683 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 03:52:51.979687 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 03:52:51.979692 | orchestrator | 2026-01-30 03:52:51.979696 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-30 03:52:51.979700 | orchestrator | Friday 30 January 2026 03:52:43 +0000 (0:00:00.756) 0:09:43.872 ******** 2026-01-30 03:52:51.979704 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:52:51.979709 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-30 03:52:51.979713 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:52:51.979720 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-30 03:52:51.979725 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:52:51.979729 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-30 03:52:51.979733 | orchestrator | 2026-01-30 03:52:51.979737 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-30 03:52:51.979742 | orchestrator | Friday 30 January 2026 03:52:47 +0000 (0:00:04.579) 0:09:48.451 ******** 2026-01-30 03:52:51.979746 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:52:51.979750 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 03:52:51.979755 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:52:51.979762 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 03:52:51.979766 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:52:51.979770 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 03:52:51.979775 | orchestrator | 2026-01-30 03:52:51.979779 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-30 03:52:51.979783 | orchestrator | Friday 30 January 2026 03:52:49 +0000 (0:00:02.270) 0:09:50.721 ******** 2026-01-30 03:52:51.979787 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-30 03:52:51.979792 | orchestrator | changed: [testbed-node-3] 2026-01-30 03:52:51.979796 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-30 03:52:51.979800 | orchestrator | changed: [testbed-node-4] 2026-01-30 03:52:51.979805 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-30 03:52:51.979809 | orchestrator | changed: [testbed-node-5] 2026-01-30 03:52:51.979813 | orchestrator | 2026-01-30 03:52:51.979818 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-30 03:52:51.979822 | orchestrator | Friday 30 January 2026 03:52:51 +0000 (0:00:01.169) 0:09:51.891 ******** 2026-01-30 03:52:51.979826 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-30 03:52:51.979831 | orchestrator | 2026-01-30 03:52:51.979835 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-30 03:52:51.979839 | orchestrator | Friday 30 January 2026 03:52:51 +0000 (0:00:00.205) 0:09:52.096 ******** 2026-01-30 03:52:51.979843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-30 03:52:51.979851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609128 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609136 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:53:35.609145 | orchestrator | 2026-01-30 03:53:35.609153 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-30 03:53:35.609162 | orchestrator | Friday 30 January 2026 03:52:51 +0000 (0:00:00.698) 0:09:52.794 ******** 2026-01-30 03:53:35.609169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 03:53:35.609223 | orchestrator | skipping: [testbed-node-3] 2026-01-30 
03:53:35.609230 | orchestrator | 2026-01-30 03:53:35.609237 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-30 03:53:35.609243 | orchestrator | Friday 30 January 2026 03:52:52 +0000 (0:00:00.844) 0:09:53.639 ******** 2026-01-30 03:53:35.609302 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 03:53:35.609312 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 03:53:35.609319 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 03:53:35.609326 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 03:53:35.609333 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 03:53:35.609339 | orchestrator | 2026-01-30 03:53:35.609346 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-30 03:53:35.609353 | orchestrator | Friday 30 January 2026 03:53:23 +0000 (0:00:31.017) 0:10:24.656 ******** 2026-01-30 03:53:35.609360 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:53:35.609366 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:53:35.609373 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:53:35.609380 | orchestrator | 2026-01-30 03:53:35.609386 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-30 03:53:35.609393 | orchestrator | 
Friday 30 January 2026 03:53:24 +0000 (0:00:00.304) 0:10:24.961 ********
2026-01-30 03:53:35.609411 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:35.609418 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:35.609425 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:35.609431 | orchestrator |
2026-01-30 03:53:35.609438 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-01-30 03:53:35.609445 | orchestrator | Friday 30 January 2026 03:53:24 +0000 (0:00:00.317) 0:10:25.278 ********
2026-01-30 03:53:35.609452 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:53:35.609459 | orchestrator |
2026-01-30 03:53:35.609466 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-01-30 03:53:35.609472 | orchestrator | Friday 30 January 2026 03:53:25 +0000 (0:00:00.714) 0:10:25.993 ********
2026-01-30 03:53:35.609479 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:53:35.609486 | orchestrator |
2026-01-30 03:53:35.609493 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-01-30 03:53:35.609499 | orchestrator | Friday 30 January 2026 03:53:25 +0000 (0:00:00.537) 0:10:26.531 ********
2026-01-30 03:53:35.609507 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:53:35.609514 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:53:35.609520 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:53:35.609527 | orchestrator |
2026-01-30 03:53:35.609534 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-01-30 03:53:35.609546 | orchestrator | Friday 30 January 2026 03:53:26 +0000 (0:00:01.251) 0:10:27.782 ********
2026-01-30 03:53:35.609553 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:53:35.609560 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:53:35.609567 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:53:35.609575 | orchestrator |
2026-01-30 03:53:35.609583 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-01-30 03:53:35.609605 | orchestrator | Friday 30 January 2026 03:53:28 +0000 (0:00:01.387) 0:10:29.170 ********
2026-01-30 03:53:35.609613 | orchestrator | changed: [testbed-node-3]
2026-01-30 03:53:35.609620 | orchestrator | changed: [testbed-node-5]
2026-01-30 03:53:35.609627 | orchestrator | changed: [testbed-node-4]
2026-01-30 03:53:35.609635 | orchestrator |
2026-01-30 03:53:35.609643 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-01-30 03:53:35.609650 | orchestrator | Friday 30 January 2026 03:53:30 +0000 (0:00:01.758) 0:10:30.928 ********
2026-01-30 03:53:35.609658 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-30 03:53:35.609666 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-30 03:53:35.609674 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 03:53:35.609681 | orchestrator |
2026-01-30 03:53:35.609689 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-30 03:53:35.609696 | orchestrator | Friday 30 January 2026 03:53:32 +0000 (0:00:02.684) 0:10:33.613 ********
2026-01-30 03:53:35.609704 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:35.609711 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:35.609719 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:35.609726 | orchestrator
| 2026-01-30 03:53:35.609733 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-30 03:53:35.609741 | orchestrator | Friday 30 January 2026 03:53:33 +0000 (0:00:00.336) 0:10:33.950 ********
2026-01-30 03:53:35.609749 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:53:35.609757 | orchestrator |
2026-01-30 03:53:35.609763 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-30 03:53:35.609770 | orchestrator | Friday 30 January 2026 03:53:33 +0000 (0:00:00.716) 0:10:34.666 ********
2026-01-30 03:53:35.609777 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:35.609784 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:35.609791 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:35.609797 | orchestrator |
2026-01-30 03:53:35.609804 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-30 03:53:35.609811 | orchestrator | Friday 30 January 2026 03:53:34 +0000 (0:00:00.312) 0:10:34.978 ********
2026-01-30 03:53:35.609817 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:35.609824 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:35.609831 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:35.609837 | orchestrator |
2026-01-30 03:53:35.609844 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-30 03:53:35.609851 | orchestrator | Friday 30 January 2026 03:53:34 +0000 (0:00:00.344) 0:10:35.323 ********
2026-01-30 03:53:35.609857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 03:53:35.609864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 03:53:35.609870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 03:53:35.609877 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:35.609884 | orchestrator |
2026-01-30 03:53:35.609890 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-30 03:53:35.609897 | orchestrator | Friday 30 January 2026 03:53:35 +0000 (0:00:00.837) 0:10:36.160 ********
2026-01-30 03:53:35.609909 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:35.609916 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:35.609923 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:35.609929 | orchestrator |
2026-01-30 03:53:35.609936 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:53:35.609943 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-01-30 03:53:35.609955 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-01-30 03:53:35.609962 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-01-30 03:53:35.609969 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-01-30 03:53:35.609975 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-01-30 03:53:35.609982 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-01-30 03:53:35.609989 | orchestrator |
2026-01-30 03:53:35.609995 | orchestrator |
2026-01-30 03:53:35.610005 | orchestrator |
2026-01-30 03:53:35.610070 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:53:35.610082 | orchestrator | Friday 30 January 2026 03:53:35 +0000 (0:00:00.249) 0:10:36.409 ********
2026-01-30 03:53:35.610093 | orchestrator | ===============================================================================
2026-01-30 03:53:35.610103 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.17s
2026-01-30 03:53:35.610114 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 46.48s
2026-01-30 03:53:35.610126 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.02s
2026-01-30 03:53:35.610145 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.47s
2026-01-30 03:53:36.099456 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.97s
2026-01-30 03:53:36.099584 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.60s
2026-01-30 03:53:36.099607 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.48s
2026-01-30 03:53:36.099625 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.91s
2026-01-30 03:53:36.099642 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.50s
2026-01-30 03:53:36.099661 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.48s
2026-01-30 03:53:36.099680 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.61s
2026-01-30 03:53:36.099695 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.45s
2026-01-30 03:53:36.099711 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.93s
2026-01-30 03:53:36.099728 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.58s
2026-01-30 03:53:36.099746 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.13s
2026-01-30 03:53:36.099764 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.76s
2026-01-30 03:53:36.099782 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.69s
2026-01-30 03:53:36.099799 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.64s
2026-01-30 03:53:36.099816 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.26s
2026-01-30 03:53:36.099833 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.25s
2026-01-30 03:53:38.353456 | orchestrator | 2026-01-30 03:53:38 | INFO  | Task 1e777ae6-ed9e-4cdf-9dec-8da4810a7d83 (ceph-pools) was prepared for execution.
2026-01-30 03:53:38.353560 | orchestrator | 2026-01-30 03:53:38 | INFO  | It takes a moment until task 1e777ae6-ed9e-4cdf-9dec-8da4810a7d83 (ceph-pools) has been started and output is visible here.
2026-01-30 03:53:50.764542 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-30 03:53:50.764674 | orchestrator | 2.16.14
2026-01-30 03:53:50.764698 | orchestrator |
2026-01-30 03:53:50.764719 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-01-30 03:53:50.764740 | orchestrator |
2026-01-30 03:53:50.764757 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 03:53:50.764777 | orchestrator | Friday 30 January 2026 03:53:42 +0000 (0:00:00.523) 0:00:00.523 ********
2026-01-30 03:53:50.764791 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 03:53:50.764803 | orchestrator |
2026-01-30 03:53:50.764815 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-30 03:53:50.764826 | orchestrator | Friday 30 January 2026 03:53:43 +0000 (0:00:00.445) 0:00:00.968 ********
2026-01-30 03:53:50.764837 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.764848 |
orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.764859 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.764870 | orchestrator |
2026-01-30 03:53:50.764881 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-30 03:53:50.764892 | orchestrator | Friday 30 January 2026 03:53:43 +0000 (0:00:00.565) 0:00:01.534 ********
2026-01-30 03:53:50.764902 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.764913 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.764924 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.764935 | orchestrator |
2026-01-30 03:53:50.764946 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 03:53:50.764956 | orchestrator | Friday 30 January 2026 03:53:43 +0000 (0:00:00.272) 0:00:01.806 ********
2026-01-30 03:53:50.764985 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.764997 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.765007 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.765018 | orchestrator |
2026-01-30 03:53:50.765029 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 03:53:50.765040 | orchestrator | Friday 30 January 2026 03:53:44 +0000 (0:00:00.724) 0:00:02.531 ********
2026-01-30 03:53:50.765053 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.765066 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.765078 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.765090 | orchestrator |
2026-01-30 03:53:50.765102 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 03:53:50.765115 | orchestrator | Friday 30 January 2026 03:53:44 +0000 (0:00:00.270) 0:00:02.802 ********
2026-01-30 03:53:50.765128 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.765140 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.765152 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.765164 | orchestrator |
2026-01-30 03:53:50.765175 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 03:53:50.765186 | orchestrator | Friday 30 January 2026 03:53:45 +0000 (0:00:00.265) 0:00:03.067 ********
2026-01-30 03:53:50.765197 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.765208 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.765218 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.765229 | orchestrator |
2026-01-30 03:53:50.765240 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 03:53:50.765251 | orchestrator | Friday 30 January 2026 03:53:45 +0000 (0:00:00.269) 0:00:03.337 ********
2026-01-30 03:53:50.765262 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:50.765274 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:50.765285 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:50.765354 | orchestrator |
2026-01-30 03:53:50.765367 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 03:53:50.765378 | orchestrator | Friday 30 January 2026 03:53:45 +0000 (0:00:00.331) 0:00:03.668 ********
2026-01-30 03:53:50.765389 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.765400 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.765411 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.765421 | orchestrator |
2026-01-30 03:53:50.765432 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 03:53:50.765443 | orchestrator | Friday 30 January 2026 03:53:45 +0000 (0:00:00.233) 0:00:03.902 ********
2026-01-30 03:53:50.765454 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 03:53:50.765465 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 03:53:50.765476 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 03:53:50.765487 | orchestrator |
2026-01-30 03:53:50.765498 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 03:53:50.765509 | orchestrator | Friday 30 January 2026 03:53:46 +0000 (0:00:00.579) 0:00:04.481 ********
2026-01-30 03:53:50.765519 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:50.765530 | orchestrator | ok: [testbed-node-4]
2026-01-30 03:53:50.765541 | orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:50.765551 | orchestrator |
2026-01-30 03:53:50.765562 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 03:53:50.765573 | orchestrator | Friday 30 January 2026 03:53:46 +0000 (0:00:00.396) 0:00:04.877 ********
2026-01-30 03:53:50.765584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 03:53:50.765595 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 03:53:50.765606 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 03:53:50.765617 | orchestrator |
2026-01-30 03:53:50.765627 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 03:53:50.765639 | orchestrator | Friday 30 January 2026 03:53:48 +0000 (0:00:02.029) 0:00:06.907 ********
2026-01-30 03:53:50.765650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 03:53:50.765662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 03:53:50.765672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 03:53:50.765683 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:50.765694 |
orchestrator | 2026-01-30 03:53:50.765724 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 03:53:50.765736 | orchestrator | Friday 30 January 2026 03:53:49 +0000 (0:00:00.516) 0:00:07.423 ******** 2026-01-30 03:53:50.765749 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 03:53:50.765764 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 03:53:50.765776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 03:53:50.765787 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:53:50.765797 | orchestrator | 2026-01-30 03:53:50.765808 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 03:53:50.765819 | orchestrator | Friday 30 January 2026 03:53:50 +0000 (0:00:00.942) 0:00:08.365 ******** 2026-01-30 03:53:50.765860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:50.765874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:50.765886 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:50.765897 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:53:50.765908 | orchestrator | 2026-01-30 03:53:50.765920 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 03:53:50.765931 | orchestrator | Friday 30 January 2026 03:53:50 +0000 (0:00:00.156) 0:00:08.522 ******** 2026-01-30 03:53:50.765944 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9b4b4ef35663', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 03:53:47.676967', 'end': '2026-01-30 03:53:47.746251', 'delta': '0:00:00.069284', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9b4b4ef35663'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 03:53:50.765959 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b97e426bfe4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 03:53:48.252331', 'end': '2026-01-30 03:53:48.305324', 'delta': '0:00:00.052993', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b97e426bfe4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 03:53:50.766008 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1f4acb9ff46e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 03:53:48.774819', 'end': '2026-01-30 03:53:48.813355', 'delta': '0:00:00.038536', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f4acb9ff46e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 03:53:57.321517 | orchestrator | 2026-01-30 03:53:57.321590 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 03:53:57.321616 | orchestrator | Friday 30 January 2026 03:53:50 +0000 (0:00:00.196) 0:00:08.719 ******** 2026-01-30 03:53:57.321621 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:53:57.321626 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:53:57.321630 | 
orchestrator | ok: [testbed-node-5]
2026-01-30 03:53:57.321634 | orchestrator |
2026-01-30 03:53:57.321638 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 03:53:57.321642 | orchestrator | Friday 30 January 2026 03:53:51 +0000 (0:00:00.430) 0:00:09.150 ********
2026-01-30 03:53:57.321657 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-01-30 03:53:57.321662 | orchestrator |
2026-01-30 03:53:57.321666 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 03:53:57.321670 | orchestrator | Friday 30 January 2026 03:53:52 +0000 (0:00:01.688) 0:00:10.838 ********
2026-01-30 03:53:57.321674 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321678 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321682 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321686 | orchestrator |
2026-01-30 03:53:57.321690 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 03:53:57.321694 | orchestrator | Friday 30 January 2026 03:53:53 +0000 (0:00:00.289) 0:00:11.128 ********
2026-01-30 03:53:57.321698 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321701 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321705 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321709 | orchestrator |
2026-01-30 03:53:57.321713 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 03:53:57.321717 | orchestrator | Friday 30 January 2026 03:53:53 +0000 (0:00:00.774) 0:00:11.902 ********
2026-01-30 03:53:57.321721 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321725 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321729 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321733 | orchestrator |
2026-01-30 03:53:57.321737 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 03:53:57.321741 | orchestrator | Friday 30 January 2026 03:53:54 +0000 (0:00:00.295) 0:00:12.198 ********
2026-01-30 03:53:57.321745 | orchestrator | ok: [testbed-node-3]
2026-01-30 03:53:57.321749 | orchestrator |
2026-01-30 03:53:57.321752 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 03:53:57.321756 | orchestrator | Friday 30 January 2026 03:53:54 +0000 (0:00:00.136) 0:00:12.334 ********
2026-01-30 03:53:57.321760 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321764 | orchestrator |
2026-01-30 03:53:57.321768 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 03:53:57.321772 | orchestrator | Friday 30 January 2026 03:53:54 +0000 (0:00:00.231) 0:00:12.565 ********
2026-01-30 03:53:57.321776 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321780 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321783 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321787 | orchestrator |
2026-01-30 03:53:57.321791 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 03:53:57.321795 | orchestrator | Friday 30 January 2026 03:53:54 +0000 (0:00:00.267) 0:00:12.833 ********
2026-01-30 03:53:57.321799 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321803 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321807 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321810 | orchestrator |
2026-01-30 03:53:57.321814 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 03:53:57.321818 | orchestrator | Friday 30 January 2026 03:53:55 +0000 (0:00:00.305) 0:00:13.138 ********
2026-01-30 03:53:57.321822 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321826 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321830 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321834 | orchestrator |
2026-01-30 03:53:57.321841 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 03:53:57.321845 | orchestrator | Friday 30 January 2026 03:53:55 +0000 (0:00:00.525) 0:00:13.664 ********
2026-01-30 03:53:57.321849 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321853 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321857 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321861 | orchestrator |
2026-01-30 03:53:57.321865 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 03:53:57.321869 | orchestrator | Friday 30 January 2026 03:53:56 +0000 (0:00:00.315) 0:00:13.980 ********
2026-01-30 03:53:57.321872 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321876 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321880 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321884 | orchestrator |
2026-01-30 03:53:57.321888 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 03:53:57.321892 | orchestrator | Friday 30 January 2026 03:53:56 +0000 (0:00:00.324) 0:00:14.305 ********
2026-01-30 03:53:57.321895 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321899 | orchestrator | skipping: [testbed-node-4]
2026-01-30 03:53:57.321903 | orchestrator | skipping: [testbed-node-5]
2026-01-30 03:53:57.321907 | orchestrator |
2026-01-30 03:53:57.321911 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 03:53:57.321915 | orchestrator | Friday 30 January 2026 03:53:56 +0000 (0:00:00.466) 0:00:14.771 ********
2026-01-30 03:53:57.321919 | orchestrator | skipping: [testbed-node-3]
2026-01-30 03:53:57.321923 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:53:57.321927 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:53:57.321931 | orchestrator | 2026-01-30 03:53:57.321935 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 03:53:57.321938 | orchestrator | Friday 30 January 2026 03:53:57 +0000 (0:00:00.309) 0:00:15.081 ******** 2026-01-30 03:53:57.321955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.321964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.321969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.321975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.321985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.321989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.321993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-30 03:53:57.321997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.322001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.322010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.373705 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.373815 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.373828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.373851 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.373864 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.373873 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.373887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.373898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.373906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-30 03:53:57.373914 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.373922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.373934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.504908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.505008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.505054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.505077 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:53:57.505106 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.505153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.505178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.505201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.505221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.505242 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:53:57.505262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.505283 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.505303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.505370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.789268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.789412 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.789424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.789433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.789441 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.789448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-30 03:53:57.789482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.789500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.789509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.789518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.789526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-30 03:53:57.789535 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:53:57.789545 | orchestrator | 2026-01-30 03:53:57.789553 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-30 03:53:57.789562 | orchestrator | Friday 30 January 2026 03:53:57 +0000 (0:00:00.553) 0:00:15.635 ******** 2026-01-30 03:53:57.789579 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880116 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880125 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880135 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880196 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.880204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
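Every loop item above is skipped for the same reason: the task's conditional `osd_auto_discovery | default(False) | bool` is false because `osd_auto_discovery` is unset in this testbed inventory. A minimal plain-Python sketch of the two filters involved (an approximation of Ansible's semantics, not its actual implementation):

```python
# Sketch (assumption): approximates how Ansible evaluates
# `osd_auto_discovery | default(False) | bool` in a `when:` clause.

_UNDEFINED = object()  # stand-in for an undefined Ansible variable

def default_filter(value, fallback):
    """Jinja2 `default`: substitute only when the variable is undefined."""
    return fallback if value is _UNDEFINED else value

def bool_filter(value):
    """Simplified version of Ansible's `bool` filter."""
    return str(value).strip().lower() in ("true", "yes", "on", "1")

def osd_auto_discovery_condition(variables):
    value = variables.get("osd_auto_discovery", _UNDEFINED)
    return bool_filter(default_filter(value, False))

# Unset in this job -> default(False) fires -> every device item skipped.
print(osd_auto_discovery_condition({}))                             # False
print(osd_auto_discovery_condition({"osd_auto_discovery": "yes"}))  # True
```

Because `default(False)` only fires for an undefined variable, setting `osd_auto_discovery: yes` in the inventory would flip the condition and let ceph-ansible consume the discovered block devices instead of skipping each one.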
2026-01-30 03:53:57.880227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996483 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996576 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996601 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996695 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996705 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:53:57.996718 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996729 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996747 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:57.996766 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098296 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098578 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098615 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098660 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.098698 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238113 | orchestrator | skipping: 
[testbed-node-4] 2026-01-30 03:53:58.238121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238129 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238151 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238167 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238177 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238215 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:53:58.238250 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:54:07.865964 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:54:07.866179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-30-02-37-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-30 03:54:07.866230 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.866246 | orchestrator | 2026-01-30 03:54:07.866259 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 03:54:07.866271 | orchestrator | Friday 30 January 2026 03:53:58 +0000 (0:00:00.563) 0:00:16.198 ******** 2026-01-30 03:54:07.866282 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:54:07.866294 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:54:07.866305 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:54:07.866320 | orchestrator | 2026-01-30 03:54:07.866338 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 03:54:07.866386 | orchestrator | Friday 30 January 2026 03:53:59 +0000 (0:00:00.846) 0:00:17.045 ******** 2026-01-30 03:54:07.866404 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:54:07.866422 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:54:07.866439 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:54:07.866456 | orchestrator | 2026-01-30 03:54:07.866474 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 03:54:07.866491 | orchestrator | Friday 30 January 2026 03:53:59 +0000 (0:00:00.294) 0:00:17.339 ******** 2026-01-30 03:54:07.866507 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:54:07.866544 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:54:07.866562 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:54:07.866580 | orchestrator | 2026-01-30 03:54:07.866600 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 03:54:07.866619 | orchestrator | Friday 30 January 2026 03:54:00 +0000 (0:00:00.651) 0:00:17.991 
******** 2026-01-30 03:54:07.866637 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.866657 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:54:07.866675 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.866695 | orchestrator | 2026-01-30 03:54:07.866712 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 03:54:07.866730 | orchestrator | Friday 30 January 2026 03:54:00 +0000 (0:00:00.307) 0:00:18.298 ******** 2026-01-30 03:54:07.866747 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.866764 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:54:07.866781 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.866799 | orchestrator | 2026-01-30 03:54:07.866818 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 03:54:07.866837 | orchestrator | Friday 30 January 2026 03:54:00 +0000 (0:00:00.661) 0:00:18.960 ******** 2026-01-30 03:54:07.866857 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.866873 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:54:07.866892 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.866908 | orchestrator | 2026-01-30 03:54:07.866925 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 03:54:07.866943 | orchestrator | Friday 30 January 2026 03:54:01 +0000 (0:00:00.327) 0:00:19.288 ******** 2026-01-30 03:54:07.866961 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-30 03:54:07.866980 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-30 03:54:07.866999 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-30 03:54:07.867017 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-30 03:54:07.867036 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-30 03:54:07.867071 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-30 03:54:07.867090 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-30 03:54:07.867108 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-30 03:54:07.867127 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-30 03:54:07.867146 | orchestrator | 2026-01-30 03:54:07.867165 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 03:54:07.867183 | orchestrator | Friday 30 January 2026 03:54:02 +0000 (0:00:01.002) 0:00:20.290 ******** 2026-01-30 03:54:07.867229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-30 03:54:07.867249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-30 03:54:07.867263 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-30 03:54:07.867274 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.867285 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-30 03:54:07.867296 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-30 03:54:07.867306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-30 03:54:07.867317 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:54:07.867328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-30 03:54:07.867374 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-30 03:54:07.867387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-30 03:54:07.867398 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.867409 | orchestrator | 2026-01-30 03:54:07.867420 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 03:54:07.867431 | orchestrator | Friday 30 January 2026 03:54:02 +0000 (0:00:00.337) 0:00:20.627 ******** 2026-01-30 
03:54:07.867443 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 03:54:07.867455 | orchestrator | 2026-01-30 03:54:07.867466 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 03:54:07.867485 | orchestrator | Friday 30 January 2026 03:54:03 +0000 (0:00:00.699) 0:00:21.327 ******** 2026-01-30 03:54:07.867503 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.867520 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:54:07.867538 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.867555 | orchestrator | 2026-01-30 03:54:07.867574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 03:54:07.867593 | orchestrator | Friday 30 January 2026 03:54:03 +0000 (0:00:00.310) 0:00:21.638 ******** 2026-01-30 03:54:07.867611 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.867629 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:54:07.867648 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.867667 | orchestrator | 2026-01-30 03:54:07.867685 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 03:54:07.867704 | orchestrator | Friday 30 January 2026 03:54:03 +0000 (0:00:00.303) 0:00:21.941 ******** 2026-01-30 03:54:07.867723 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.867741 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:54:07.867760 | orchestrator | skipping: [testbed-node-5] 2026-01-30 03:54:07.867781 | orchestrator | 2026-01-30 03:54:07.867844 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 03:54:07.867865 | orchestrator | Friday 30 January 2026 03:54:04 +0000 (0:00:00.512) 0:00:22.454 ******** 2026-01-30 
03:54:07.867882 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:54:07.867900 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:54:07.867916 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:54:07.867933 | orchestrator | 2026-01-30 03:54:07.867951 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 03:54:07.867970 | orchestrator | Friday 30 January 2026 03:54:04 +0000 (0:00:00.417) 0:00:22.871 ******** 2026-01-30 03:54:07.868008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:54:07.868037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:54:07.868056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:54:07.868074 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.868091 | orchestrator | 2026-01-30 03:54:07.868110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 03:54:07.868129 | orchestrator | Friday 30 January 2026 03:54:05 +0000 (0:00:00.380) 0:00:23.251 ******** 2026-01-30 03:54:07.868149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:54:07.868169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:54:07.868186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:54:07.868205 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.868217 | orchestrator | 2026-01-30 03:54:07.868228 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 03:54:07.868239 | orchestrator | Friday 30 January 2026 03:54:05 +0000 (0:00:00.369) 0:00:23.621 ******** 2026-01-30 03:54:07.868250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 03:54:07.868261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 03:54:07.868271 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 03:54:07.868282 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:54:07.868293 | orchestrator | 2026-01-30 03:54:07.868303 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 03:54:07.868314 | orchestrator | Friday 30 January 2026 03:54:06 +0000 (0:00:00.370) 0:00:23.991 ******** 2026-01-30 03:54:07.868325 | orchestrator | ok: [testbed-node-3] 2026-01-30 03:54:07.868335 | orchestrator | ok: [testbed-node-4] 2026-01-30 03:54:07.868441 | orchestrator | ok: [testbed-node-5] 2026-01-30 03:54:07.868453 | orchestrator | 2026-01-30 03:54:07.868464 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 03:54:07.868475 | orchestrator | Friday 30 January 2026 03:54:06 +0000 (0:00:00.310) 0:00:24.301 ******** 2026-01-30 03:54:07.868486 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-30 03:54:07.868501 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-30 03:54:07.868519 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-30 03:54:07.868535 | orchestrator | 2026-01-30 03:54:07.868552 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 03:54:07.868567 | orchestrator | Friday 30 January 2026 03:54:07 +0000 (0:00:00.725) 0:00:25.026 ******** 2026-01-30 03:54:07.868584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 03:54:07.868618 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 03:55:49.296580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 03:55:49.296692 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-30 03:55:49.296701 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-30 03:55:49.296707 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 03:55:49.296712 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 03:55:49.296717 | orchestrator | 2026-01-30 03:55:49.296723 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 03:55:49.296729 | orchestrator | Friday 30 January 2026 03:54:07 +0000 (0:00:00.796) 0:00:25.823 ******** 2026-01-30 03:55:49.296733 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 03:55:49.296738 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 03:55:49.296743 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 03:55:49.296764 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-30 03:55:49.296769 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 03:55:49.296774 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 03:55:49.296779 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 03:55:49.296783 | orchestrator | 2026-01-30 03:55:49.296788 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-30 03:55:49.296792 | orchestrator | Friday 30 January 2026 03:54:09 +0000 (0:00:01.599) 0:00:27.423 ******** 2026-01-30 03:55:49.296797 | orchestrator | skipping: [testbed-node-3] 2026-01-30 03:55:49.296803 | orchestrator | skipping: [testbed-node-4] 2026-01-30 03:55:49.296808 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-30 03:55:49.296813 | orchestrator | 2026-01-30 03:55:49.296818 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-30 03:55:49.296823 | orchestrator | Friday 30 January 2026 03:54:09 +0000 (0:00:00.418) 0:00:27.842 ******** 2026-01-30 03:55:49.296829 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-30 03:55:49.296835 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-30 03:55:49.296851 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-30 03:55:49.296856 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-30 03:55:49.296860 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-30 03:55:49.296865 | orchestrator | 2026-01-30 03:55:49.296870 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-01-30 03:55:49.296874 | orchestrator | Friday 30 January 2026 03:54:55 +0000 (0:00:46.084) 0:01:13.927 ******** 2026-01-30 03:55:49.296879 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296884 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296889 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296893 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296898 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296902 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296907 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-30 03:55:49.296912 | orchestrator | 2026-01-30 03:55:49.296917 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-30 03:55:49.296921 | orchestrator | Friday 30 January 2026 03:55:19 +0000 (0:00:23.686) 0:01:37.613 ******** 2026-01-30 03:55:49.296941 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296946 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296951 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296955 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296960 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296964 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296969 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 03:55:49.296974 | orchestrator | 2026-01-30 03:55:49.296979 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-30 03:55:49.296983 | orchestrator | Friday 30 January 2026 03:55:31 +0000 (0:00:11.847) 0:01:49.461 ******** 2026-01-30 03:55:49.296988 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.296992 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-30 03:55:49.296997 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 03:55:49.297002 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.297006 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-30 03:55:49.297011 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 03:55:49.297016 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.297020 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-30 03:55:49.297025 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 03:55:49.297029 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.297034 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-30 03:55:49.297038 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 03:55:49.297043 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.297048 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-01-30 03:55:49.297052 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 03:55:49.297057 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 03:55:49.297061 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-30 03:55:49.297066 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 03:55:49.297071 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-30 03:55:49.297075 | orchestrator | 2026-01-30 03:55:49.297083 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:55:49.297088 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-30 03:55:49.297094 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-30 03:55:49.297100 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-30 03:55:49.297105 | orchestrator | 2026-01-30 03:55:49.297109 | orchestrator | 2026-01-30 03:55:49.297114 | orchestrator | 2026-01-30 03:55:49.297119 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:55:49.297127 | orchestrator | Friday 30 January 2026 03:55:49 +0000 (0:00:17.768) 0:02:07.229 ******** 2026-01-30 03:55:49.297131 | orchestrator | =============================================================================== 2026-01-30 03:55:49.297136 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.08s 2026-01-30 03:55:49.297141 | orchestrator | generate keys ---------------------------------------------------------- 23.69s 2026-01-30 03:55:49.297146 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.77s 
2026-01-30 03:55:49.297150 | orchestrator | get keys from monitors ------------------------------------------------- 11.85s 2026-01-30 03:55:49.297155 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.03s 2026-01-30 03:55:49.297159 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s 2026-01-30 03:55:49.297164 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.60s 2026-01-30 03:55:49.297169 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.00s 2026-01-30 03:55:49.297173 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.94s 2026-01-30 03:55:49.297178 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.85s 2026-01-30 03:55:49.297182 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.80s 2026-01-30 03:55:49.297187 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.77s 2026-01-30 03:55:49.297192 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.73s 2026-01-30 03:55:49.297199 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.72s 2026-01-30 03:55:49.680048 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2026-01-30 03:55:49.680148 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s 2026-01-30 03:55:49.680162 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2026-01-30 03:55:49.680174 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.58s 2026-01-30 03:55:49.680185 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.57s 2026-01-30 
03:55:49.680197 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.56s 2026-01-30 03:55:52.166691 | orchestrator | 2026-01-30 03:55:52 | INFO  | Task e5921a51-9b60-48c3-aaf2-9ef2a81b9811 (copy-ceph-keys) was prepared for execution. 2026-01-30 03:55:52.166779 | orchestrator | 2026-01-30 03:55:52 | INFO  | It takes a moment until task e5921a51-9b60-48c3-aaf2-9ef2a81b9811 (copy-ceph-keys) has been started and output is visible here. 2026-01-30 03:56:28.620743 | orchestrator | 2026-01-30 03:56:28.620853 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-30 03:56:28.620868 | orchestrator | 2026-01-30 03:56:28.620879 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-30 03:56:28.620889 | orchestrator | Friday 30 January 2026 03:55:56 +0000 (0:00:00.142) 0:00:00.142 ******** 2026-01-30 03:56:28.620900 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-30 03:56:28.620911 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.620921 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.620931 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-30 03:56:28.620941 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.620951 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-30 03:56:28.620960 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-30 03:56:28.620992 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-01-30 03:56:28.621003 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-30 03:56:28.621013 | orchestrator | 2026-01-30 03:56:28.621023 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-30 03:56:28.621032 | orchestrator | Friday 30 January 2026 03:56:01 +0000 (0:00:04.669) 0:00:04.811 ******** 2026-01-30 03:56:28.621056 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-30 03:56:28.621066 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621076 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-30 03:56:28.621095 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621105 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-30 03:56:28.621115 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-30 03:56:28.621124 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-30 03:56:28.621134 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-30 03:56:28.621144 | orchestrator | 2026-01-30 03:56:28.621153 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-30 03:56:28.621163 | orchestrator | Friday 30 January 2026 03:56:05 +0000 (0:00:04.259) 0:00:09.071 ******** 2026-01-30 03:56:28.621174 
| orchestrator | changed: [testbed-manager -> localhost] 2026-01-30 03:56:28.621184 | orchestrator | 2026-01-30 03:56:28.621194 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-30 03:56:28.621204 | orchestrator | Friday 30 January 2026 03:56:06 +0000 (0:00:00.899) 0:00:09.971 ******** 2026-01-30 03:56:28.621213 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-30 03:56:28.621225 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621240 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621256 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-30 03:56:28.621272 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621289 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-30 03:56:28.621307 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-30 03:56:28.621323 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-30 03:56:28.621341 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-30 03:56:28.621358 | orchestrator | 2026-01-30 03:56:28.621371 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-30 03:56:28.621383 | orchestrator | Friday 30 January 2026 03:56:18 +0000 (0:00:12.676) 0:00:22.647 ******** 2026-01-30 03:56:28.621394 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-30 03:56:28.621405 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-01-30 03:56:28.621417 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-30 03:56:28.621428 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-30 03:56:28.621466 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-30 03:56:28.621479 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-30 03:56:28.621490 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-30 03:56:28.621501 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-30 03:56:28.621559 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-30 03:56:28.621572 | orchestrator | 2026-01-30 03:56:28.621584 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-01-30 03:56:28.621595 | orchestrator | Friday 30 January 2026 03:56:21 +0000 (0:00:02.949) 0:00:25.597 ******** 2026-01-30 03:56:28.621607 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-30 03:56:28.621617 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621627 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621637 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-30 03:56:28.621646 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-30 03:56:28.621656 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-30 03:56:28.621666 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-01-30 03:56:28.621675 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-30 03:56:28.621685 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-30 03:56:28.621720 | orchestrator | 2026-01-30 03:56:28.621736 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 03:56:28.621747 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 03:56:28.621758 | orchestrator | 2026-01-30 03:56:28.621768 | orchestrator | 2026-01-30 03:56:28.621778 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 03:56:28.621788 | orchestrator | Friday 30 January 2026 03:56:28 +0000 (0:00:06.459) 0:00:32.057 ******** 2026-01-30 03:56:28.621797 | orchestrator | =============================================================================== 2026-01-30 03:56:28.621807 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.68s 2026-01-30 03:56:28.621816 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.46s 2026-01-30 03:56:28.621826 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.67s 2026-01-30 03:56:28.621836 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.26s 2026-01-30 03:56:28.621845 | orchestrator | Check if target directories exist --------------------------------------- 2.95s 2026-01-30 03:56:28.621855 | orchestrator | Create share directory -------------------------------------------------- 0.90s 2026-01-30 03:56:40.907857 | orchestrator | 2026-01-30 03:56:40 | INFO  | Task 87ec3ea6-8bf9-4624-a3cd-481ed67103b2 (cephclient) was prepared for execution. 
2026-01-30 03:56:40.907974 | orchestrator | 2026-01-30 03:56:40 | INFO  | It takes a moment until task 87ec3ea6-8bf9-4624-a3cd-481ed67103b2 (cephclient) has been started and output is visible here.
2026-01-30 03:57:37.857039 | orchestrator |
2026-01-30 03:57:37.857152 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-30 03:57:37.857166 | orchestrator |
2026-01-30 03:57:37.857175 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-30 03:57:37.857184 | orchestrator | Friday 30 January 2026 03:56:45 +0000 (0:00:00.230) 0:00:00.230 ********
2026-01-30 03:57:37.857192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-30 03:57:37.857221 | orchestrator |
2026-01-30 03:57:37.857230 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-30 03:57:37.857237 | orchestrator | Friday 30 January 2026 03:56:45 +0000 (0:00:00.240) 0:00:00.470 ********
2026-01-30 03:57:37.857246 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-30 03:57:37.857254 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-30 03:57:37.857262 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-30 03:57:37.857270 | orchestrator |
2026-01-30 03:57:37.857277 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-30 03:57:37.857285 | orchestrator | Friday 30 January 2026 03:56:46 +0000 (0:00:01.202) 0:00:01.673 ********
2026-01-30 03:57:37.857293 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-30 03:57:37.857301 | orchestrator |
2026-01-30 03:57:37.857308 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-30 03:57:37.857315 | orchestrator | Friday 30 January 2026 03:56:47 +0000 (0:00:01.354) 0:00:03.028 ********
2026-01-30 03:57:37.857323 | orchestrator | changed: [testbed-manager]
2026-01-30 03:57:37.857331 | orchestrator |
2026-01-30 03:57:37.857338 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-30 03:57:37.857346 | orchestrator | Friday 30 January 2026 03:56:48 +0000 (0:00:00.826) 0:00:03.854 ********
2026-01-30 03:57:37.857353 | orchestrator | changed: [testbed-manager]
2026-01-30 03:57:37.857360 | orchestrator |
2026-01-30 03:57:37.857368 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-30 03:57:37.857375 | orchestrator | Friday 30 January 2026 03:56:49 +0000 (0:00:00.884) 0:00:04.739 ********
2026-01-30 03:57:37.857383 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-30 03:57:37.857390 | orchestrator | ok: [testbed-manager]
2026-01-30 03:57:37.857398 | orchestrator |
2026-01-30 03:57:37.857405 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-30 03:57:37.857412 | orchestrator | Friday 30 January 2026 03:57:29 +0000 (0:00:39.481) 0:00:44.221 ********
2026-01-30 03:57:37.857420 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-30 03:57:37.857427 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-30 03:57:37.857435 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-30 03:57:37.857442 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-30 03:57:37.857452 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-30 03:57:37.857461 | orchestrator |
2026-01-30 03:57:37.857470 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-30 03:57:37.857478 | orchestrator | Friday 30 January 2026 03:57:32 +0000 (0:00:03.711) 0:00:47.932 ********
2026-01-30 03:57:37.857486 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-30 03:57:37.857495 | orchestrator |
2026-01-30 03:57:37.857503 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-30 03:57:37.857512 | orchestrator | Friday 30 January 2026 03:57:33 +0000 (0:00:00.401) 0:00:48.334 ********
2026-01-30 03:57:37.857520 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:57:37.857528 | orchestrator |
2026-01-30 03:57:37.857536 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-30 03:57:37.857545 | orchestrator | Friday 30 January 2026 03:57:33 +0000 (0:00:00.116) 0:00:48.451 ********
2026-01-30 03:57:37.857553 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:57:37.857561 | orchestrator |
2026-01-30 03:57:37.857581 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-30 03:57:37.857591 | orchestrator | Friday 30 January 2026 03:57:33 +0000 (0:00:00.424) 0:00:48.875 ********
2026-01-30 03:57:37.857599 | orchestrator | changed: [testbed-manager]
2026-01-30 03:57:37.857615 | orchestrator |
2026-01-30 03:57:37.857624 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-30 03:57:37.857632 | orchestrator | Friday 30 January 2026 03:57:34 +0000 (0:00:01.208) 0:00:50.084 ********
2026-01-30 03:57:37.857641 | orchestrator | changed: [testbed-manager]
2026-01-30 03:57:37.857649 | orchestrator |
2026-01-30 03:57:37.857657 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-30 03:57:37.857666 | orchestrator | Friday 30 January 2026 03:57:35 +0000 (0:00:00.670) 0:00:50.754 ********
2026-01-30 03:57:37.857674 | orchestrator | changed: [testbed-manager]
2026-01-30 03:57:37.857682 | orchestrator |
2026-01-30 03:57:37.857690 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-30 03:57:37.857697 | orchestrator | Friday 30 January 2026 03:57:36 +0000 (0:00:00.559) 0:00:51.314 ********
2026-01-30 03:57:37.857704 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-30 03:57:37.857712 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-30 03:57:37.857719 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-30 03:57:37.857727 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-30 03:57:37.857735 | orchestrator |
2026-01-30 03:57:37.857742 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:57:37.857750 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 03:57:37.857758 | orchestrator |
2026-01-30 03:57:37.857766 | orchestrator |
2026-01-30 03:57:37.857789 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:57:37.857797 | orchestrator | Friday 30 January 2026 03:57:37 +0000 (0:00:01.426) 0:00:52.740 ********
2026-01-30 03:57:37.857805 | orchestrator | ===============================================================================
2026-01-30 03:57:37.857812 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.48s
2026-01-30 03:57:37.857819 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.71s
2026-01-30 03:57:37.857827 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.43s
2026-01-30 03:57:37.857859 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.35s
2026-01-30 03:57:37.857867 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.21s
2026-01-30 03:57:37.857875 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.20s
2026-01-30 03:57:37.857882 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.88s
2026-01-30 03:57:37.857889 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.83s
2026-01-30 03:57:37.857896 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.67s
2026-01-30 03:57:37.857904 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.56s
2026-01-30 03:57:37.857911 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.42s
2026-01-30 03:57:37.857918 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.40s
2026-01-30 03:57:37.857925 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s
2026-01-30 03:57:37.857933 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-01-30 03:57:40.919692 | orchestrator | 2026-01-30 03:57:40 | INFO  | Task be0316f7-1985-4892-a92f-58cfb7725392 (ceph-bootstrap-dashboard) was prepared for execution.
2026-01-30 03:57:40.919766 | orchestrator | 2026-01-30 03:57:40 | INFO  | It takes a moment until task be0316f7-1985-4892-a92f-58cfb7725392 (ceph-bootstrap-dashboard) has been started and output is visible here.
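The "Copy wrapper scripts" task above installs one wrapper per CLI tool (ceph, ceph-authtool, rados, radosgw-admin, rbd) on the manager. A minimal sketch of the idea, assuming each wrapper simply forwards the call into the long-running cephclient container so no ceph packages are needed on the host — the directory, container name, and exact forwarding mechanism are illustrative assumptions, not the osism.services.cephclient implementation:

```shell
#!/usr/bin/env bash
# Sketch: generate one thin wrapper per ceph CLI tool. Each wrapper execs the
# same-named binary inside the running "cephclient" container (assumed name),
# passing all arguments through.
set -euo pipefail

WRAP_DIR=/tmp/cephclient-wrappers   # a real deployment would use e.g. /usr/local/bin
mkdir -p "$WRAP_DIR"

for tool in ceph ceph-authtool rados radosgw-admin rbd; do
    cat > "$WRAP_DIR/$tool" <<EOF
#!/usr/bin/env bash
exec docker exec -i cephclient $tool "\$@"
EOF
    chmod 755 "$WRAP_DIR/$tool"
done
```

With such wrappers on the PATH, `ceph -s` on the manager transparently runs inside the container, which is why the later dashboard play can issue ceph commands from testbed-manager.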
2026-01-30 03:59:05.440279 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-30 03:59:05.440407 | orchestrator | 2.16.14
2026-01-30 03:59:05.440427 | orchestrator |
2026-01-30 03:59:05.440441 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-01-30 03:59:05.440481 | orchestrator |
2026-01-30 03:59:05.440494 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-01-30 03:59:05.440505 | orchestrator | Friday 30 January 2026 03:57:44 +0000 (0:00:00.194) 0:00:00.194 ********
2026-01-30 03:59:05.440517 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440529 | orchestrator |
2026-01-30 03:59:05.440540 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-01-30 03:59:05.440551 | orchestrator | Friday 30 January 2026 03:57:46 +0000 (0:00:02.077) 0:00:02.272 ********
2026-01-30 03:59:05.440562 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440573 | orchestrator |
2026-01-30 03:59:05.440585 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-01-30 03:59:05.440596 | orchestrator | Friday 30 January 2026 03:57:47 +0000 (0:00:00.939) 0:00:03.212 ********
2026-01-30 03:59:05.440607 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440618 | orchestrator |
2026-01-30 03:59:05.440629 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-01-30 03:59:05.440640 | orchestrator | Friday 30 January 2026 03:57:48 +0000 (0:00:01.008) 0:00:04.221 ********
2026-01-30 03:59:05.440650 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440662 | orchestrator |
2026-01-30 03:59:05.440672 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-01-30 03:59:05.440683 | orchestrator | Friday 30 January 2026 03:57:49 +0000 (0:00:01.083) 0:00:05.304 ********
2026-01-30 03:59:05.440694 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440705 | orchestrator |
2026-01-30 03:59:05.440730 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-01-30 03:59:05.440742 | orchestrator | Friday 30 January 2026 03:57:50 +0000 (0:00:00.995) 0:00:06.300 ********
2026-01-30 03:59:05.440753 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440764 | orchestrator |
2026-01-30 03:59:05.440778 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-01-30 03:59:05.440792 | orchestrator | Friday 30 January 2026 03:57:51 +0000 (0:00:01.003) 0:00:07.303 ********
2026-01-30 03:59:05.440805 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440818 | orchestrator |
2026-01-30 03:59:05.440830 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-01-30 03:59:05.440847 | orchestrator | Friday 30 January 2026 03:57:52 +0000 (0:00:01.101) 0:00:08.405 ********
2026-01-30 03:59:05.440866 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440882 | orchestrator |
2026-01-30 03:59:05.440899 | orchestrator | TASK [Create admin user] *******************************************************
2026-01-30 03:59:05.440916 | orchestrator | Friday 30 January 2026 03:57:53 +0000 (0:00:01.133) 0:00:09.538 ********
2026-01-30 03:59:05.440963 | orchestrator | changed: [testbed-manager]
2026-01-30 03:59:05.440983 | orchestrator |
2026-01-30 03:59:05.441030 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-01-30 03:59:05.441047 | orchestrator | Friday 30 January 2026 03:58:40 +0000 (0:00:46.587) 0:00:56.126 ********
2026-01-30 03:59:05.441064 | orchestrator | skipping: [testbed-manager]
2026-01-30 03:59:05.441082 | orchestrator |
2026-01-30 03:59:05.441101 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-30 03:59:05.441119 | orchestrator |
2026-01-30 03:59:05.441136 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-30 03:59:05.441156 | orchestrator | Friday 30 January 2026 03:58:40 +0000 (0:00:00.207) 0:00:56.333 ********
2026-01-30 03:59:05.441176 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:59:05.441194 | orchestrator |
2026-01-30 03:59:05.441210 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-30 03:59:05.441221 | orchestrator |
2026-01-30 03:59:05.441231 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-30 03:59:05.441242 | orchestrator | Friday 30 January 2026 03:58:52 +0000 (0:00:11.828) 0:01:08.162 ********
2026-01-30 03:59:05.441267 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:59:05.441278 | orchestrator |
2026-01-30 03:59:05.441289 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-01-30 03:59:05.441300 | orchestrator |
2026-01-30 03:59:05.441311 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-01-30 03:59:05.441323 | orchestrator | Friday 30 January 2026 03:59:03 +0000 (0:00:11.361) 0:01:19.523 ********
2026-01-30 03:59:05.441334 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:59:05.441344 | orchestrator |
2026-01-30 03:59:05.441355 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 03:59:05.441369 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 03:59:05.441390 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:59:05.441408 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:59:05.441424 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 03:59:05.441439 | orchestrator |
2026-01-30 03:59:05.441455 | orchestrator |
2026-01-30 03:59:05.441471 | orchestrator |
2026-01-30 03:59:05.441487 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 03:59:05.441506 | orchestrator | Friday 30 January 2026 03:59:05 +0000 (0:00:01.312) 0:01:20.836 ********
2026-01-30 03:59:05.441526 | orchestrator | ===============================================================================
2026-01-30 03:59:05.441545 | orchestrator | Create admin user ------------------------------------------------------ 46.59s
2026-01-30 03:59:05.441583 | orchestrator | Restart ceph manager service ------------------------------------------- 24.50s
2026-01-30 03:59:05.441595 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.08s
2026-01-30 03:59:05.441606 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.13s
2026-01-30 03:59:05.441617 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.10s
2026-01-30 03:59:05.441627 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.08s
2026-01-30 03:59:05.441638 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.01s
2026-01-30 03:59:05.441649 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.00s
2026-01-30 03:59:05.441660 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.00s
2026-01-30 03:59:05.441671 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.94s
2026-01-30 03:59:05.441681 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.21s
2026-01-30 03:59:05.698527 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-01-30 03:59:07.629063 | orchestrator | 2026-01-30 03:59:07 | INFO  | Task a85bbc4f-c5fd-40c5-8e86-2d3d06db719b (keystone) was prepared for execution.
2026-01-30 03:59:07.629146 | orchestrator | 2026-01-30 03:59:07 | INFO  | It takes a moment until task a85bbc4f-c5fd-40c5-8e86-2d3d06db719b (keystone) has been started and output is visible here.
2026-01-30 03:59:14.625932 | orchestrator |
2026-01-30 03:59:14.626210 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 03:59:14.626235 | orchestrator |
2026-01-30 03:59:14.626264 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 03:59:14.626277 | orchestrator | Friday 30 January 2026 03:59:11 +0000 (0:00:00.246) 0:00:00.246 ********
2026-01-30 03:59:14.626288 | orchestrator | ok: [testbed-node-0]
2026-01-30 03:59:14.626301 | orchestrator | ok: [testbed-node-1]
2026-01-30 03:59:14.626312 | orchestrator | ok: [testbed-node-2]
2026-01-30 03:59:14.626323 | orchestrator |
2026-01-30 03:59:14.626361 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 03:59:14.626373 | orchestrator | Friday 30 January 2026 03:59:11 +0000 (0:00:00.293) 0:00:00.539 ********
2026-01-30 03:59:14.626384 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-01-30 03:59:14.626396 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-01-30 03:59:14.626406 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-01-30 03:59:14.626417 | orchestrator |
2026-01-30 03:59:14.626428 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-01-30 03:59:14.626439 | orchestrator |
2026-01-30 03:59:14.626450 | orchestrator | TASK
[keystone : include_tasks] ************************************************ 2026-01-30 03:59:14.626465 | orchestrator | Friday 30 January 2026 03:59:12 +0000 (0:00:00.403) 0:00:00.942 ******** 2026-01-30 03:59:14.626485 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:59:14.626509 | orchestrator | 2026-01-30 03:59:14.626538 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-30 03:59:14.626558 | orchestrator | Friday 30 January 2026 03:59:12 +0000 (0:00:00.555) 0:00:01.497 ******** 2026-01-30 03:59:14.626584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 03:59:14.626611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 03:59:14.626672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 03:59:14.626709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-30 03:59:14.626723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-30 03:59:14.626735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-30 03:59:14.626747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-30 03:59:14.626758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-30 03:59:14.626769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-30 03:59:14.626787 | orchestrator | 2026-01-30 03:59:14.626799 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-01-30 03:59:14.626818 | orchestrator | Friday 30 January 2026 03:59:14 +0000 (0:00:01.661) 0:00:03.159 ******** 2026-01-30 03:59:20.103925 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:59:20.104083 | orchestrator | 2026-01-30 03:59:20.104127 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-30 03:59:20.104146 | orchestrator | Friday 30 January 2026 03:59:14 +0000 (0:00:00.262) 0:00:03.422 ******** 2026-01-30 03:59:20.104162 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:59:20.104179 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:59:20.104196 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:59:20.104212 | orchestrator | 2026-01-30 03:59:20.104229 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-30 03:59:20.104246 | orchestrator | Friday 30 January 2026 03:59:15 +0000 (0:00:00.301) 0:00:03.724 ******** 2026-01-30 03:59:20.104263 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:59:20.104279 | orchestrator | 2026-01-30 03:59:20.104295 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-30 03:59:20.104312 | orchestrator | Friday 30 January 2026 03:59:15 +0000 (0:00:00.775) 0:00:04.499 ******** 2026-01-30 03:59:20.104330 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 03:59:20.104345 | orchestrator | 2026-01-30 03:59:20.104360 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-30 03:59:20.104373 | orchestrator | Friday 30 January 2026 03:59:16 +0000 (0:00:00.500) 0:00:05.000 ******** 2026-01-30 03:59:20.104394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:20.104417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:20.104436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:20.104511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:20.104527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:20.104540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:20.104552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:20.104563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:20.104582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:20.104593 | orchestrator |
2026-01-30 03:59:20.104605 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-01-30 03:59:20.104616 | orchestrator | Friday 30 January 2026 03:59:19 +0000 (0:00:03.081) 0:00:08.081 ********
2026-01-30 03:59:20.104636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:20.863139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:20.863290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:20.863324 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:59:20.863344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:20.863377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:20.863409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:20.863430 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:59:20.863461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:20.863474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:20.863491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:20.863522 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:59:20.863543 | orchestrator |
2026-01-30 03:59:20.863561 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-01-30 03:59:20.863574 | orchestrator | Friday 30 January 2026 03:59:20 +0000 (0:00:00.564) 0:00:08.646 ********
2026-01-30 03:59:20.863586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:20.863604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:20.863627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:23.996497 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:59:23.996586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:23.996608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:23.996642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:23.996655 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:59:23.996678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:23.996692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:23.996719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:23.996731 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:59:23.996742 | orchestrator |
2026-01-30 03:59:23.996754 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-01-30 03:59:23.996766 | orchestrator | Friday 30 January 2026 03:59:20 +0000 (0:00:00.757) 0:00:09.404 ********
2026-01-30 03:59:23.996778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:23.996798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:23.996816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:23.996836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:28.539248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:28.539375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:28.539394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:28.539412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:28.539453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:28.539473 | orchestrator |
2026-01-30 03:59:28.539491 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-01-30 03:59:28.539508 | orchestrator | Friday 30 January 2026 03:59:23 +0000 (0:00:03.130) 0:00:12.534 ********
2026-01-30 03:59:28.539582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:28.539617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:28.539635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:28.539652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:28.539676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-30 03:59:28.539706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-30 03:59:31.691157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:31.691235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:31.691242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-30 03:59:31.691248 | orchestrator |
2026-01-30 03:59:31.691254 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-01-30 03:59:31.691260 | orchestrator | Friday 30 January 2026 03:59:28 +0000 (0:00:04.537) 0:00:17.072 ********
2026-01-30 03:59:31.691264 | orchestrator | changed: [testbed-node-0]
2026-01-30 03:59:31.691270 | orchestrator | changed: [testbed-node-1]
2026-01-30 03:59:31.691274 | orchestrator | changed: [testbed-node-2]
2026-01-30 03:59:31.691279 | orchestrator |
2026-01-30 03:59:31.691283 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-01-30 03:59:31.691288 | orchestrator | Friday 30 January 2026 03:59:29 +0000 (0:00:01.360) 0:00:18.432 ********
2026-01-30 03:59:31.691292 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:59:31.691297 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:59:31.691301 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:59:31.691305 | orchestrator |
2026-01-30 03:59:31.691310 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-01-30 03:59:31.691314 | orchestrator | Friday 30 January 2026 03:59:30 +0000 (0:00:00.529) 0:00:18.961 ********
2026-01-30 03:59:31.691318 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:59:31.691336 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:59:31.691341 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:59:31.691345 | orchestrator |
2026-01-30 03:59:31.691350 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-01-30 03:59:31.691354 | orchestrator | Friday 30 January 2026 03:59:30 +0000 (0:00:00.454) 0:00:19.416 ********
2026-01-30 03:59:31.691359 | orchestrator | skipping: [testbed-node-0]
2026-01-30 03:59:31.691363 | orchestrator | skipping: [testbed-node-1]
2026-01-30 03:59:31.691367 | orchestrator | skipping: [testbed-node-2]
2026-01-30 03:59:31.691372 | orchestrator |
2026-01-30 03:59:31.691377 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-01-30 03:59:31.691381 | orchestrator | Friday 30 January 2026 03:59:31 +0000 (0:00:00.269) 0:00:19.685 ********
2026-01-30 03:59:31.691420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes':
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-30 03:59:31.691430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:59:31.691439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:59:31.691446 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:59:31.691453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-30 03:59:31.691465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:59:31.691482 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:59:31.691489 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:59:31.691501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-30 03:59:50.101011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 03:59:50.101193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 03:59:50.101258 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:59:50.101297 | orchestrator | 2026-01-30 03:59:50.101311 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-30 03:59:50.101324 | orchestrator | Friday 30 January 2026 03:59:31 +0000 (0:00:00.541) 0:00:20.227 ******** 2026-01-30 03:59:50.101335 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:59:50.101347 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:59:50.101358 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:59:50.101369 | orchestrator | 2026-01-30 03:59:50.101380 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-30 03:59:50.101392 | orchestrator | Friday 30 January 2026 03:59:31 +0000 (0:00:00.263) 0:00:20.490 ******** 2026-01-30 03:59:50.101403 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-30 03:59:50.101441 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-30 03:59:50.101467 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-30 03:59:50.101478 | orchestrator | 2026-01-30 03:59:50.101490 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-30 03:59:50.101501 | orchestrator | Friday 30 January 2026 03:59:33 +0000 (0:00:01.672) 0:00:22.162 ******** 2026-01-30 03:59:50.101512 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:59:50.101523 | orchestrator | 2026-01-30 03:59:50.101534 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-30 03:59:50.101548 | orchestrator | Friday 30 January 2026 03:59:34 +0000 (0:00:00.885) 0:00:23.048 ******** 2026-01-30 03:59:50.101560 | orchestrator | skipping: [testbed-node-0] 2026-01-30 03:59:50.101573 | orchestrator | skipping: [testbed-node-1] 2026-01-30 03:59:50.101585 | orchestrator | skipping: [testbed-node-2] 2026-01-30 03:59:50.101597 | orchestrator | 2026-01-30 03:59:50.101609 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-30 03:59:50.101622 | orchestrator | Friday 30 January 2026 03:59:35 +0000 (0:00:00.548) 0:00:23.596 ******** 2026-01-30 03:59:50.101634 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 03:59:50.101646 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 03:59:50.101659 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 03:59:50.101671 | orchestrator | 2026-01-30 03:59:50.101684 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-30 03:59:50.101697 | orchestrator | Friday 30 January 2026 03:59:36 +0000 (0:00:00.967) 
0:00:24.564 ******** 2026-01-30 03:59:50.101708 | orchestrator | ok: [testbed-node-0] 2026-01-30 03:59:50.101720 | orchestrator | ok: [testbed-node-1] 2026-01-30 03:59:50.101813 | orchestrator | ok: [testbed-node-2] 2026-01-30 03:59:50.101827 | orchestrator | 2026-01-30 03:59:50.101838 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-30 03:59:50.101850 | orchestrator | Friday 30 January 2026 03:59:36 +0000 (0:00:00.421) 0:00:24.985 ******** 2026-01-30 03:59:50.101861 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-30 03:59:50.101872 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-30 03:59:50.101883 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-30 03:59:50.101894 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-30 03:59:50.101905 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-30 03:59:50.101916 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-30 03:59:50.101927 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-30 03:59:50.101938 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-30 03:59:50.101968 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-30 03:59:50.101980 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-30 03:59:50.101991 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-30 
03:59:50.102002 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-30 03:59:50.102013 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-30 03:59:50.102114 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-30 03:59:50.102126 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-01-30 03:59:50.102149 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-30 03:59:50.102161 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-30 03:59:50.102172 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-30 03:59:50.102183 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-30 03:59:50.102193 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-30 03:59:50.102204 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-30 03:59:50.102215 | orchestrator | 2026-01-30 03:59:50.102226 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-01-30 03:59:50.102237 | orchestrator | Friday 30 January 2026 03:59:45 +0000 (0:00:08.747) 0:00:33.733 ******** 2026-01-30 03:59:50.102248 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-30 03:59:50.102259 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-30 03:59:50.102270 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-30 03:59:50.102281 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-30 03:59:50.102312 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-30 03:59:50.102324 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-30 03:59:50.102334 | orchestrator | 2026-01-30 03:59:50.102352 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-01-30 03:59:50.102364 | orchestrator | Friday 30 January 2026 03:59:47 +0000 (0:00:02.541) 0:00:36.274 ******** 2026-01-30 03:59:50.102379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 03:59:50.102403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 04:01:38.977189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-30 04:01:38.977288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-30 04:01:38.977308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-30 04:01:38.977313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-30 04:01:38.977317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-30 04:01:38.977330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-30 04:01:38.977351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-30 04:01:38.977355 | orchestrator | 2026-01-30 04:01:38.977363 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-01-30 04:01:38.977371 | orchestrator | Friday 30 January 2026 03:59:50 +0000 (0:00:02.359) 0:00:38.634 ******** 2026-01-30 04:01:38.977377 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:01:38.977384 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:01:38.977389 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:01:38.977395 | orchestrator | 2026-01-30 04:01:38.977402 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-01-30 04:01:38.977408 | orchestrator | Friday 30 January 2026 03:59:50 +0000 (0:00:00.442) 0:00:39.077 ******** 2026-01-30 04:01:38.977413 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:01:38.977418 | orchestrator | 2026-01-30 04:01:38.977424 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-01-30 04:01:38.977430 | orchestrator | Friday 30 January 2026 03:59:52 +0000 (0:00:02.429) 0:00:41.506 ******** 2026-01-30 04:01:38.977436 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:01:38.977442 | orchestrator | 2026-01-30 04:01:38.977448 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-01-30 04:01:38.977454 | orchestrator | Friday 30 January 2026 03:59:55 +0000 (0:00:02.442) 0:00:43.949 ******** 2026-01-30 04:01:38.977460 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:01:38.977466 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:01:38.977473 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:01:38.977478 | orchestrator | 2026-01-30 04:01:38.977484 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-01-30 04:01:38.977490 | orchestrator | Friday 30 January 2026 03:59:56 +0000 (0:00:00.787) 0:00:44.736 ******** 2026-01-30 04:01:38.977497 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:01:38.977503 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:01:38.977512 | orchestrator | ok: 
[testbed-node-2] 2026-01-30 04:01:38.977516 | orchestrator | 2026-01-30 04:01:38.977520 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-01-30 04:01:38.977525 | orchestrator | Friday 30 January 2026 03:59:56 +0000 (0:00:00.300) 0:00:45.037 ******** 2026-01-30 04:01:38.977529 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:01:38.977533 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:01:38.977537 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:01:38.977541 | orchestrator | 2026-01-30 04:01:38.977545 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-01-30 04:01:38.977548 | orchestrator | Friday 30 January 2026 03:59:56 +0000 (0:00:00.310) 0:00:45.347 ******** 2026-01-30 04:01:38.977552 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:01:38.977556 | orchestrator | 2026-01-30 04:01:38.977560 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-01-30 04:01:38.977564 | orchestrator | Friday 30 January 2026 04:00:12 +0000 (0:00:16.037) 0:01:01.384 ******** 2026-01-30 04:01:38.977567 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:01:38.977571 | orchestrator | 2026-01-30 04:01:38.977575 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-30 04:01:38.977584 | orchestrator | Friday 30 January 2026 04:00:23 +0000 (0:00:11.149) 0:01:12.533 ******** 2026-01-30 04:01:38.977588 | orchestrator | 2026-01-30 04:01:38.977592 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-30 04:01:38.977596 | orchestrator | Friday 30 January 2026 04:00:24 +0000 (0:00:00.067) 0:01:12.601 ******** 2026-01-30 04:01:38.977599 | orchestrator | 2026-01-30 04:01:38.977603 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-01-30 
04:01:38.977607 | orchestrator | Friday 30 January 2026 04:00:24 +0000 (0:00:00.081) 0:01:12.682 ******** 2026-01-30 04:01:38.977611 | orchestrator | 2026-01-30 04:01:38.977615 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-01-30 04:01:38.977618 | orchestrator | Friday 30 January 2026 04:00:24 +0000 (0:00:00.069) 0:01:12.751 ******** 2026-01-30 04:01:38.977622 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:01:38.977626 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:01:38.977630 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:01:38.977633 | orchestrator | 2026-01-30 04:01:38.977637 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-01-30 04:01:38.977641 | orchestrator | Friday 30 January 2026 04:01:16 +0000 (0:00:52.680) 0:02:05.432 ******** 2026-01-30 04:01:38.977645 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:01:38.977649 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:01:38.977652 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:01:38.977656 | orchestrator | 2026-01-30 04:01:38.977660 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-01-30 04:01:38.977664 | orchestrator | Friday 30 January 2026 04:01:26 +0000 (0:00:09.678) 0:02:15.111 ******** 2026-01-30 04:01:38.977668 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:01:38.977671 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:01:38.977675 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:01:38.977679 | orchestrator | 2026-01-30 04:01:38.977683 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-30 04:01:38.977687 | orchestrator | Friday 30 January 2026 04:01:38 +0000 (0:00:11.832) 0:02:26.943 ******** 2026-01-30 04:01:38.977695 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:02:33.177250 | orchestrator | 2026-01-30 04:02:33.177409 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-01-30 04:02:33.177431 | orchestrator | Friday 30 January 2026 04:01:38 +0000 (0:00:00.574) 0:02:27.518 ******** 2026-01-30 04:02:33.177445 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:02:33.177460 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:02:33.177468 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:02:33.177476 | orchestrator | 2026-01-30 04:02:33.177485 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-01-30 04:02:33.177493 | orchestrator | Friday 30 January 2026 04:01:39 +0000 (0:00:00.722) 0:02:28.240 ******** 2026-01-30 04:02:33.177501 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:02:33.177510 | orchestrator | 2026-01-30 04:02:33.177518 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-01-30 04:02:33.177527 | orchestrator | Friday 30 January 2026 04:01:41 +0000 (0:00:02.138) 0:02:30.379 ******** 2026-01-30 04:02:33.177535 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-01-30 04:02:33.177543 | orchestrator | 2026-01-30 04:02:33.177551 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-01-30 04:02:33.177559 | orchestrator | Friday 30 January 2026 04:01:54 +0000 (0:00:12.455) 0:02:42.834 ******** 2026-01-30 04:02:33.177567 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-01-30 04:02:33.177575 | orchestrator | 2026-01-30 04:02:33.177583 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-01-30 04:02:33.177591 | orchestrator | Friday 30 January 2026 04:02:20 +0000 (0:00:26.489) 0:03:09.324 ******** 2026-01-30 04:02:33.177621 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-01-30 04:02:33.177631 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-01-30 04:02:33.177639 | orchestrator | 2026-01-30 04:02:33.177647 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-01-30 04:02:33.177655 | orchestrator | Friday 30 January 2026 04:02:28 +0000 (0:00:07.493) 0:03:16.817 ******** 2026-01-30 04:02:33.177663 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:02:33.177671 | orchestrator | 2026-01-30 04:02:33.177679 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-01-30 04:02:33.177687 | orchestrator | Friday 30 January 2026 04:02:28 +0000 (0:00:00.125) 0:03:16.943 ******** 2026-01-30 04:02:33.177695 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:02:33.177703 | orchestrator | 2026-01-30 04:02:33.177711 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-01-30 04:02:33.177732 | orchestrator | Friday 30 January 2026 04:02:28 +0000 (0:00:00.119) 0:03:17.062 ******** 2026-01-30 04:02:33.177740 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:02:33.177748 | orchestrator | 2026-01-30 04:02:33.177756 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-01-30 04:02:33.177764 | orchestrator | Friday 30 January 2026 04:02:28 +0000 (0:00:00.119) 0:03:17.182 ******** 2026-01-30 04:02:33.177772 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:02:33.177780 | orchestrator | 2026-01-30 04:02:33.177788 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-01-30 04:02:33.177796 | orchestrator | Friday 30 January 2026 04:02:28 +0000 (0:00:00.296) 0:03:17.479 ******** 2026-01-30 04:02:33.177804 | orchestrator | ok: [testbed-node-0] 2026-01-30 
04:02:33.177812 | orchestrator | 2026-01-30 04:02:33.177820 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-30 04:02:33.177828 | orchestrator | Friday 30 January 2026 04:02:32 +0000 (0:00:03.483) 0:03:20.963 ******** 2026-01-30 04:02:33.177836 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:02:33.177844 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:02:33.177852 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:02:33.177860 | orchestrator | 2026-01-30 04:02:33.177868 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:02:33.177877 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 04:02:33.177887 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-30 04:02:33.177894 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-30 04:02:33.177902 | orchestrator | 2026-01-30 04:02:33.177911 | orchestrator | 2026-01-30 04:02:33.177919 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:02:33.177927 | orchestrator | Friday 30 January 2026 04:02:32 +0000 (0:00:00.426) 0:03:21.390 ******** 2026-01-30 04:02:33.177935 | orchestrator | =============================================================================== 2026-01-30 04:02:33.177943 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 52.68s 2026-01-30 04:02:33.177950 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.49s 2026-01-30 04:02:33.177958 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.04s 2026-01-30 04:02:33.177966 | orchestrator | keystone : Creating admin project, user, role, service, and 
endpoint --- 12.46s 2026-01-30 04:02:33.177974 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.83s 2026-01-30 04:02:33.177982 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.15s 2026-01-30 04:02:33.177990 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.68s 2026-01-30 04:02:33.178004 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.75s 2026-01-30 04:02:33.178062 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.49s 2026-01-30 04:02:33.178087 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.54s 2026-01-30 04:02:33.178096 | orchestrator | keystone : Creating default user role ----------------------------------- 3.48s 2026-01-30 04:02:33.178104 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.13s 2026-01-30 04:02:33.178112 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.08s 2026-01-30 04:02:33.178120 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.54s 2026-01-30 04:02:33.178128 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.44s 2026-01-30 04:02:33.178136 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.43s 2026-01-30 04:02:33.178144 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.36s 2026-01-30 04:02:33.178152 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.14s 2026-01-30 04:02:33.178159 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.67s 2026-01-30 04:02:33.178167 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 
1.66s 2026-01-30 04:02:35.387168 | orchestrator | 2026-01-30 04:02:35 | INFO  | Task 3fb26b5b-44ba-46be-b25c-8cf1992e8901 (placement) was prepared for execution. 2026-01-30 04:02:35.387266 | orchestrator | 2026-01-30 04:02:35 | INFO  | It takes a moment until task 3fb26b5b-44ba-46be-b25c-8cf1992e8901 (placement) has been started and output is visible here. 2026-01-30 04:03:11.374938 | orchestrator | 2026-01-30 04:03:11.375021 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:03:11.375028 | orchestrator | 2026-01-30 04:03:11.375033 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:03:11.375039 | orchestrator | Friday 30 January 2026 04:02:39 +0000 (0:00:00.246) 0:00:00.246 ******** 2026-01-30 04:03:11.375044 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:03:11.375051 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:03:11.375056 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:03:11.375061 | orchestrator | 2026-01-30 04:03:11.375066 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:03:11.375071 | orchestrator | Friday 30 January 2026 04:02:39 +0000 (0:00:00.311) 0:00:00.557 ******** 2026-01-30 04:03:11.375076 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-30 04:03:11.375092 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-30 04:03:11.375097 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-30 04:03:11.375102 | orchestrator | 2026-01-30 04:03:11.375107 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-30 04:03:11.375111 | orchestrator | 2026-01-30 04:03:11.375116 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-30 04:03:11.375121 | orchestrator | Friday 30 January 2026 04:02:40 
+0000 (0:00:00.415) 0:00:00.973 ******** 2026-01-30 04:03:11.375126 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:03:11.375132 | orchestrator | 2026-01-30 04:03:11.375137 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-30 04:03:11.375141 | orchestrator | Friday 30 January 2026 04:02:40 +0000 (0:00:00.506) 0:00:01.480 ******** 2026-01-30 04:03:11.375146 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-30 04:03:11.375151 | orchestrator | 2026-01-30 04:03:11.375155 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-30 04:03:11.375160 | orchestrator | Friday 30 January 2026 04:02:44 +0000 (0:00:04.236) 0:00:05.717 ******** 2026-01-30 04:03:11.375180 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-30 04:03:11.375185 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-30 04:03:11.375190 | orchestrator | 2026-01-30 04:03:11.375194 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-30 04:03:11.375199 | orchestrator | Friday 30 January 2026 04:02:51 +0000 (0:00:07.131) 0:00:12.849 ******** 2026-01-30 04:03:11.375204 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-01-30 04:03:11.375209 | orchestrator | 2026-01-30 04:03:11.375213 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-30 04:03:11.375218 | orchestrator | Friday 30 January 2026 04:02:55 +0000 (0:00:03.834) 0:00:16.683 ******** 2026-01-30 04:03:11.375223 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-30 04:03:11.375227 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-01-30 04:03:11.375232 | orchestrator | 2026-01-30 04:03:11.375236 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-30 04:03:11.375241 | orchestrator | Friday 30 January 2026 04:03:00 +0000 (0:00:04.303) 0:00:20.986 ******** 2026-01-30 04:03:11.375246 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30 04:03:11.375250 | orchestrator | 2026-01-30 04:03:11.375255 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-30 04:03:11.375260 | orchestrator | Friday 30 January 2026 04:03:03 +0000 (0:00:03.354) 0:00:24.341 ******** 2026-01-30 04:03:11.375264 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-30 04:03:11.375269 | orchestrator | 2026-01-30 04:03:11.375274 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-30 04:03:11.375278 | orchestrator | Friday 30 January 2026 04:03:07 +0000 (0:00:03.991) 0:00:28.332 ******** 2026-01-30 04:03:11.375283 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:03:11.375288 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:03:11.375292 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:03:11.375297 | orchestrator | 2026-01-30 04:03:11.375301 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-30 04:03:11.375306 | orchestrator | Friday 30 January 2026 04:03:07 +0000 (0:00:00.273) 0:00:28.606 ******** 2026-01-30 04:03:11.375313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:11.375334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:11.375470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:11.375476 | orchestrator | 2026-01-30 04:03:11.375482 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-30 04:03:11.375487 | orchestrator | Friday 30 January 2026 04:03:08 +0000 (0:00:01.015) 0:00:29.621 ******** 2026-01-30 04:03:11.375491 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:03:11.375496 | orchestrator | 2026-01-30 04:03:11.375501 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-30 04:03:11.375507 | orchestrator | Friday 30 January 2026 04:03:09 +0000 (0:00:00.291) 0:00:29.912 ******** 2026-01-30 04:03:11.375512 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:03:11.375517 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:03:11.375523 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:03:11.375528 | orchestrator | 2026-01-30 04:03:11.375533 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-30 04:03:11.375539 | orchestrator | Friday 30 January 2026 04:03:09 +0000 (0:00:00.289) 0:00:30.202 ******** 2026-01-30 04:03:11.375544 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:03:11.375549 | orchestrator | 2026-01-30 04:03:11.375555 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-30 04:03:11.375560 | orchestrator | Friday 30 January 2026 04:03:09 +0000 (0:00:00.550) 0:00:30.752 ******** 2026-01-30 
04:03:11.375566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:11.375579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:14.143637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:14.143707 | orchestrator | 2026-01-30 04:03:14.143714 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-30 04:03:14.143719 | orchestrator | Friday 30 January 2026 04:03:11 +0000 (0:00:01.487) 0:00:32.240 ******** 2026-01-30 04:03:14.143725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 04:03:14.143730 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:03:14.143736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 04:03:14.143741 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:03:14.143745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 04:03:14.143764 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:03:14.143768 | orchestrator | 2026-01-30 04:03:14.143772 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-30 04:03:14.143786 | orchestrator | Friday 30 January 2026 04:03:11 +0000 (0:00:00.512) 0:00:32.753 ******** 2026-01-30 04:03:14.143794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 04:03:14.143798 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:03:14.143803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 04:03:14.143807 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:03:14.143811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-30 04:03:14.143815 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:03:14.143819 | orchestrator | 2026-01-30 04:03:14.143823 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-30 04:03:14.143827 | orchestrator | Friday 30 January 2026 04:03:12 +0000 (0:00:00.698) 0:00:33.452 ******** 2026-01-30 04:03:14.143831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:14.143846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:20.755217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:20.755311 | orchestrator | 2026-01-30 04:03:20.755327 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-30 04:03:20.755339 | orchestrator | Friday 30 January 2026 04:03:14 +0000 (0:00:01.564) 0:00:35.016 ******** 2026-01-30 04:03:20.755380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:20.755393 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:20.755438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-30 04:03:20.755450 | orchestrator | 2026-01-30 04:03:20.755460 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
***************
2026-01-30 04:03:20.755470 | orchestrator | Friday 30 January 2026 04:03:16 +0000 (0:00:02.192) 0:00:37.208 ********
2026-01-30 04:03:20.755496 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-30 04:03:20.755508 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-30 04:03:20.755518 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-01-30 04:03:20.755528 | orchestrator |
2026-01-30 04:03:20.755537 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-01-30 04:03:20.755547 | orchestrator | Friday 30 January 2026 04:03:17 +0000 (0:00:01.392) 0:00:38.601 ********
2026-01-30 04:03:20.755557 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:03:20.755569 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:03:20.755578 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:03:20.755588 | orchestrator |
2026-01-30 04:03:20.755598 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-01-30 04:03:20.755608 | orchestrator | Friday 30 January 2026 04:03:19 +0000 (0:00:01.290) 0:00:39.892 ********
2026-01-30 04:03:20.755618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-30 04:03:20.755636 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:03:20.755647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-30 04:03:20.755657 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:03:20.755667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-30 04:03:20.755678 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:03:20.755687 | orchestrator |
2026-01-30 04:03:20.755702 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-01-30 04:03:20.755712 | orchestrator | Friday 30 January 2026 04:03:19 +0000 (0:00:00.690) 0:00:40.582 ********
2026-01-30 04:03:20.755730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-30 04:03:45.861277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-30 04:03:45.861505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-30 04:03:45.861529 | orchestrator |
2026-01-30 04:03:45.861542 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-01-30 04:03:45.861555 | orchestrator | Friday 30 January 2026 04:03:20 +0000 (0:00:01.048) 0:00:41.631 ********
2026-01-30 04:03:45.861567 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:03:45.861580 | orchestrator |
2026-01-30 04:03:45.861592 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-01-30 04:03:45.861603 | orchestrator | Friday 30 January 2026 04:03:22 +0000 (0:00:02.141) 0:00:43.772 ********
2026-01-30 04:03:45.861614 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:03:45.861625 | orchestrator |
2026-01-30 04:03:45.861632 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-01-30 04:03:45.861638 | orchestrator | Friday 30 January 2026 04:03:25 +0000 (0:00:02.399) 0:00:46.172 ********
2026-01-30 04:03:45.861645 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:03:45.861652 | orchestrator |
2026-01-30 04:03:45.861659 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-30 04:03:45.861665 | orchestrator | Friday 30 January 2026 04:03:39 +0000 (0:00:14.564) 0:01:00.736 ********
2026-01-30 04:03:45.861672 | orchestrator |
2026-01-30 04:03:45.861679 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-30 04:03:45.861686 | orchestrator | Friday 30 January 2026 04:03:39 +0000 (0:00:00.068) 0:01:00.805 ********
2026-01-30 04:03:45.861692 | orchestrator |
2026-01-30 04:03:45.861699 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-01-30 04:03:45.861706 | orchestrator | Friday 30 January 2026 04:03:39 +0000 (0:00:00.064) 0:01:00.870 ********
2026-01-30 04:03:45.861712 | orchestrator |
2026-01-30 04:03:45.861719 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-01-30 04:03:45.861738 | orchestrator | Friday 30 January 2026 04:03:40 +0000 (0:00:00.066) 0:01:00.936 ********
2026-01-30 04:03:45.861745 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:03:45.861751 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:03:45.861758 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:03:45.861765 | orchestrator |
2026-01-30 04:03:45.861771 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:03:45.861780 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 04:03:45.861789 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:03:45.861795 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:03:45.861802 | orchestrator |
2026-01-30 04:03:45.861810 | orchestrator |
2026-01-30 04:03:45.861818 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:03:45.861833 | orchestrator | Friday 30 January 2026 04:03:45 +0000 (0:00:05.496) 0:01:06.433 ********
2026-01-30 04:03:45.861840 | orchestrator | ===============================================================================
2026-01-30 04:03:45.861848 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.56s
2026-01-30 04:03:45.861871 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.13s
2026-01-30 04:03:45.861879 | orchestrator | placement : Restart placement-api container ----------------------------- 5.50s
2026-01-30 04:03:45.861887 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.30s
2026-01-30 04:03:45.861895 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.24s
2026-01-30 04:03:45.861903 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.99s
2026-01-30 04:03:45.861911 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.83s
2026-01-30 04:03:45.861918 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.35s
2026-01-30 04:03:45.861927 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.40s
2026-01-30 04:03:45.861935 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.19s
2026-01-30 04:03:45.861942 | orchestrator | placement : Creating placement databases -------------------------------- 2.14s
2026-01-30 04:03:45.861950 | orchestrator | placement : Copying over config.json files for services ----------------- 1.56s
2026-01-30 04:03:45.861958 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.49s
2026-01-30 04:03:45.861966 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.39s
2026-01-30 04:03:45.861973 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.29s
2026-01-30 04:03:45.861981 | orchestrator | placement : Check placement containers ---------------------------------- 1.05s
2026-01-30 04:03:45.861989 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.02s
2026-01-30 04:03:45.861997 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.70s
2026-01-30 04:03:45.862005 | orchestrator | placement : Copying over existing policy file --------------------------- 0.69s
2026-01-30 04:03:45.862012 | orchestrator | placement : include_tasks ----------------------------------------------- 0.55s
2026-01-30 04:03:48.019498 | orchestrator | 2026-01-30 04:03:48 | INFO  | Task f2a09ca5-dda4-4c17-9220-dcd698b397a8 (neutron) was prepared for execution.
2026-01-30 04:03:48.019609 | orchestrator | 2026-01-30 04:03:48 | INFO  | It takes a moment until task f2a09ca5-dda4-4c17-9220-dcd698b397a8 (neutron) has been started and output is visible here.
2026-01-30 04:04:36.871015 | orchestrator |
2026-01-30 04:04:36.871155 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:04:36.871186 | orchestrator |
2026-01-30 04:04:36.871206 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:04:36.871226 | orchestrator | Friday 30 January 2026 04:03:51 +0000 (0:00:00.258) 0:00:00.258 ********
2026-01-30 04:04:36.871244 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:04:36.871260 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:04:36.871272 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:04:36.871283 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:04:36.871294 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:04:36.871305 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:04:36.871316 | orchestrator |
2026-01-30 04:04:36.871327 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:04:36.871338 | orchestrator | Friday 30 January 2026 04:03:52 +0000 (0:00:00.638) 0:00:00.896 ********
2026-01-30 04:04:36.871349 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-01-30 04:04:36.871360 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-01-30 04:04:36.871371 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-01-30 04:04:36.871409 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-01-30 04:04:36.871421 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-01-30 04:04:36.871498 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-01-30 04:04:36.871510 | orchestrator |
2026-01-30 04:04:36.871527 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-01-30 04:04:36.871555 | orchestrator |
2026-01-30 04:04:36.871576 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-30 04:04:36.871615 | orchestrator | Friday 30 January 2026 04:03:53 +0000 (0:00:00.575) 0:00:01.472 ********
2026-01-30 04:04:36.871636 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:04:36.871656 | orchestrator |
2026-01-30 04:04:36.871674 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-01-30 04:04:36.871694 | orchestrator | Friday 30 January 2026 04:03:54 +0000 (0:00:01.113) 0:00:02.586 ********
2026-01-30 04:04:36.871714 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:04:36.871733 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:04:36.871746 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:04:36.871757 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:04:36.871769 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:04:36.871779 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:04:36.871790 | orchestrator |
2026-01-30 04:04:36.871801 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-01-30 04:04:36.871812 | orchestrator | Friday 30 January 2026 04:03:55 +0000 (0:00:01.261) 0:00:03.847 ********
2026-01-30 04:04:36.871823 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:04:36.871834 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:04:36.871844 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:04:36.871855 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:04:36.871865 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:04:36.871876 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:04:36.871887 | orchestrator |
2026-01-30 04:04:36.871898 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-01-30 04:04:36.871909 | orchestrator | Friday 30 January 2026 04:03:56 +0000 (0:00:01.073) 0:00:04.921 ********
2026-01-30 04:04:36.871920 | orchestrator | ok: [testbed-node-0] => {
2026-01-30 04:04:36.871932 | orchestrator |     "changed": false,
2026-01-30 04:04:36.871943 | orchestrator |     "msg": "All assertions passed"
2026-01-30 04:04:36.871954 | orchestrator | }
2026-01-30 04:04:36.871965 | orchestrator | ok: [testbed-node-1] => {
2026-01-30 04:04:36.871976 | orchestrator |     "changed": false,
2026-01-30 04:04:36.871987 | orchestrator |     "msg": "All assertions passed"
2026-01-30 04:04:36.871998 | orchestrator | }
2026-01-30 04:04:36.872008 | orchestrator | ok: [testbed-node-2] => {
2026-01-30 04:04:36.872019 | orchestrator |     "changed": false,
2026-01-30 04:04:36.872030 | orchestrator |     "msg": "All assertions passed"
2026-01-30 04:04:36.872040 | orchestrator | }
2026-01-30 04:04:36.872051 | orchestrator | ok: [testbed-node-3] => {
2026-01-30 04:04:36.872062 | orchestrator |     "changed": false,
2026-01-30 04:04:36.872072 | orchestrator |     "msg": "All assertions passed"
2026-01-30 04:04:36.872083 | orchestrator | }
2026-01-30 04:04:36.872094 | orchestrator | ok: [testbed-node-4] => {
2026-01-30 04:04:36.872105 | orchestrator |     "changed": false,
2026-01-30 04:04:36.872116 | orchestrator |     "msg": "All assertions passed"
2026-01-30 04:04:36.872127 | orchestrator | }
2026-01-30 04:04:36.872138 | orchestrator | ok: [testbed-node-5] => {
2026-01-30 04:04:36.872149 | orchestrator |     "changed": false,
2026-01-30 04:04:36.872160 | orchestrator |     "msg": "All assertions passed"
2026-01-30 04:04:36.872171 | orchestrator | }
2026-01-30 04:04:36.872181 | orchestrator |
2026-01-30 04:04:36.872192 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-01-30 04:04:36.872203 | orchestrator | Friday 30 January 2026 04:03:57 +0000 (0:00:00.734) 0:00:05.655 ********
2026-01-30 04:04:36.872214 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:04:36.872240 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:04:36.872251 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:04:36.872262 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:04:36.872272 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:04:36.872283 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:04:36.872294 | orchestrator |
2026-01-30 04:04:36.872305 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-01-30 04:04:36.872316 | orchestrator | Friday 30 January 2026 04:03:57 +0000 (0:00:00.599) 0:00:06.254 ********
2026-01-30 04:04:36.872327 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-01-30 04:04:36.872338 | orchestrator |
2026-01-30 04:04:36.872348 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-01-30 04:04:36.872359 | orchestrator | Friday 30 January 2026 04:04:01 +0000 (0:00:04.015) 0:00:10.270 ********
2026-01-30 04:04:36.872370 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-01-30 04:04:36.872382 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-01-30 04:04:36.872393 | orchestrator |
2026-01-30 04:04:36.872425 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-01-30 04:04:36.872466 | orchestrator | Friday 30 January 2026 04:04:08 +0000 (0:00:06.714) 0:00:16.984 ********
2026-01-30 04:04:36.872477 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-30 04:04:36.872487 | orchestrator |
2026-01-30 04:04:36.872498 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-01-30 04:04:36.872509 | orchestrator | Friday 30 January 2026 04:04:12 +0000 (0:00:03.303) 0:00:20.288 ********
2026-01-30 04:04:36.872520 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-30 04:04:36.872531 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-01-30 04:04:36.872542 | orchestrator |
2026-01-30 04:04:36.872552 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-01-30 04:04:36.872563 | orchestrator | Friday 30 January 2026 04:04:16 +0000 (0:00:04.250) 0:00:24.538 ********
2026-01-30 04:04:36.872574 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-30 04:04:36.872585 | orchestrator |
2026-01-30 04:04:36.872595 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-01-30 04:04:36.872606 | orchestrator | Friday 30 January 2026 04:04:19 +0000 (0:00:03.284) 0:00:27.822 ********
2026-01-30 04:04:36.872617 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-01-30 04:04:36.872627 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-01-30 04:04:36.872638 | orchestrator |
2026-01-30 04:04:36.872649 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-30 04:04:36.872660 | orchestrator | Friday 30 January 2026 04:04:27 +0000 (0:00:08.090) 0:00:35.913 ********
2026-01-30 04:04:36.872670 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:04:36.872681 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:04:36.872699 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:04:36.872710 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:04:36.872721 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:04:36.872731 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:04:36.872800 | orchestrator |
2026-01-30 04:04:36.872811 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-01-30 04:04:36.872822 | orchestrator | Friday 30 January 2026 04:04:28 +0000 (0:00:00.718) 0:00:36.632 ********
2026-01-30 04:04:36.872833 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:04:36.872844 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:04:36.872855 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:04:36.872866 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:04:36.872876 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:04:36.872887 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:04:36.872898 | orchestrator |
2026-01-30 04:04:36.872930 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-01-30 04:04:36.872941 | orchestrator | Friday 30 January 2026 04:04:30 +0000 (0:00:02.064) 0:00:38.696 ********
2026-01-30 04:04:36.872952 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:04:36.872963 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:04:36.872974 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:04:36.872985 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:04:36.872996 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:04:36.873007 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:04:36.873017 | orchestrator |
2026-01-30 04:04:36.873028 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-30 04:04:36.873039 | orchestrator | Friday 30 January 2026 04:04:32 +0000 (0:00:01.910) 0:00:40.606 ********
2026-01-30 04:04:36.873050 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:04:36.873141 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:04:36.873156 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:04:36.873167 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:04:36.873178 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:04:36.873189 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:04:36.873200 | orchestrator |
2026-01-30 04:04:36.873211 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-01-30 04:04:36.873222 | orchestrator | Friday 30 January 2026 04:04:34 +0000 (0:00:02.052) 0:00:42.659 ********
2026-01-30 04:04:36.873236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:04:36.873265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:04:42.211913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:04:42.212044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:04:42.212063 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:04:42.212076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:04:42.212090 | orchestrator |
2026-01-30 04:04:42.212105 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-01-30 04:04:42.212119 | orchestrator | Friday 30 January 2026 04:04:36 +0000 (0:00:02.485) 0:00:45.145 ********
2026-01-30 04:04:42.212131 | orchestrator | [WARNING]: Skipped
2026-01-30 04:04:42.212145 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-01-30 04:04:42.212159 | orchestrator | due to this access issue:
2026-01-30 04:04:42.212173 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-01-30 04:04:42.212186 | orchestrator | a directory
2026-01-30 04:04:42.212199 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 04:04:42.212211 | orchestrator |
2026-01-30 04:04:42.212223 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-30 04:04:42.212236 | orchestrator | Friday 30 January 2026 04:04:37 +0000 (0:00:00.759) 0:00:45.904 ********
2026-01-30 04:04:42.212249 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:04:42.212263 | orchestrator |
2026-01-30 04:04:42.212275 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-01-30 04:04:42.212303 | orchestrator | Friday 30 January 2026 04:04:38 +0000 (0:00:01.197) 0:00:47.102 ********
2026-01-30 04:04:42.212317 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:04:42.212344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:04:42.212357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:04:42.212370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:04:42.212391 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:04:46.684012 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:04:46.684111 | orchestrator |
2026-01-30 04:04:46.684124 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-01-30 04:04:46.684134 | orchestrator | Friday 30 January 2026 04:04:42 +0000 (0:00:03.382) 0:00:50.484 ********
2026-01-30 04:04:46.684146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:04:46.684156 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:04:46.684166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'],
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:46.684175 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:04:46.684184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:46.684193 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:04:46.684239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:46.684249 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:04:46.684262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:46.684271 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:04:46.684280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:46.684289 | orchestrator | skipping: 
[testbed-node-5] 2026-01-30 04:04:46.684298 | orchestrator | 2026-01-30 04:04:46.684306 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-30 04:04:46.684315 | orchestrator | Friday 30 January 2026 04:04:44 +0000 (0:00:01.898) 0:00:52.382 ******** 2026-01-30 04:04:46.684324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:46.684333 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:04:46.684346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:51.541959 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:04:51.542140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:51.542161 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:04:51.542173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:51.542185 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:04:51.542196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:51.542206 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:04:51.542216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:51.542249 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:04:51.542260 | 
orchestrator | 2026-01-30 04:04:51.542271 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-30 04:04:51.542282 | orchestrator | Friday 30 January 2026 04:04:46 +0000 (0:00:02.576) 0:00:54.958 ******** 2026-01-30 04:04:51.542292 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:04:51.542301 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:04:51.542311 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:04:51.542321 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:04:51.542330 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:04:51.542340 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:04:51.542350 | orchestrator | 2026-01-30 04:04:51.542359 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-30 04:04:51.542369 | orchestrator | Friday 30 January 2026 04:04:48 +0000 (0:00:02.134) 0:00:57.093 ******** 2026-01-30 04:04:51.542379 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:04:51.542389 | orchestrator | 2026-01-30 04:04:51.542399 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-30 04:04:51.542424 | orchestrator | Friday 30 January 2026 04:04:48 +0000 (0:00:00.127) 0:00:57.220 ******** 2026-01-30 04:04:51.542435 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:04:51.542488 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:04:51.542501 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:04:51.542513 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:04:51.542524 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:04:51.542536 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:04:51.542547 | orchestrator | 2026-01-30 04:04:51.542560 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-30 04:04:51.542571 | orchestrator | Friday 30 January 2026 04:04:49 
+0000 (0:00:00.552) 0:00:57.773 ******** 2026-01-30 04:04:51.542589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:51.542601 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:04:51.542613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:51.542626 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:04:51.542645 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:51.542657 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:04:51.542669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:51.542681 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:04:51.542706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:04:58.640289 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:04:58.640399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:04:58.640420 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:04:58.640433 | orchestrator | 2026-01-30 04:04:58.640501 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-30 04:04:58.640561 | orchestrator | Friday 30 January 2026 04:04:51 +0000 (0:00:02.039) 0:00:59.813 
******** 2026-01-30 04:04:58.640585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:04:58.640640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:04:58.640654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:04:58.640702 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-30 04:04:58.640715 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-30 04:04:58.640735 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-30 04:04:58.640746 | orchestrator | 2026-01-30 04:04:58.640758 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-30 04:04:58.640769 | orchestrator | Friday 30 January 2026 04:04:54 +0000 (0:00:02.866) 0:01:02.679 ******** 2026-01-30 04:04:58.640781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:04:58.640829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:04:58.640859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:05:02.924922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-30 04:05:02.925044 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-30 
04:05:02.925060 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-30 04:05:02.925073 | orchestrator | 2026-01-30 04:05:02.925085 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-30 04:05:02.925097 | orchestrator | Friday 30 January 2026 04:04:58 +0000 (0:00:04.236) 0:01:06.915 ******** 2026-01-30 04:05:02.925122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-01-30 04:05:02.925134 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:02.925163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:02.925182 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:02.925192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:02.925203 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:02.925213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:02.925223 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:02.925233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:02.925243 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:02.925259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:02.925269 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:02.925285 | orchestrator | 2026-01-30 04:05:02.925295 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-30 04:05:02.925305 | orchestrator | Friday 30 January 2026 04:05:00 +0000 (0:00:01.840) 0:01:08.755 ******** 2026-01-30 04:05:02.925315 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:02.925325 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:02.925335 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:02.925345 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:05:02.925355 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:05:02.925364 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:05:02.925375 | orchestrator | 2026-01-30 04:05:02.925392 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-30 04:05:02.925418 | orchestrator | Friday 30 January 2026 04:05:02 +0000 (0:00:02.440) 0:01:11.196 ******** 2026-01-30 04:05:18.955049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:18.955164 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.955183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:18.955195 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:18.955208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:18.955219 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:18.955247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:05:18.955301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:05:18.955315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-30 04:05:18.955327 | orchestrator | 2026-01-30 04:05:18.955341 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-30 04:05:18.955362 | orchestrator | Friday 30 January 2026 04:05:06 +0000 (0:00:03.221) 0:01:14.417 ******** 2026-01-30 04:05:18.955376 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:18.955387 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:18.955398 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:18.955409 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.955420 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:18.955431 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:18.955442 | orchestrator | 2026-01-30 04:05:18.955453 | orchestrator | TASK [neutron : Copying over 
openvswitch_agent.ini] **************************** 2026-01-30 04:05:18.955464 | orchestrator | Friday 30 January 2026 04:05:08 +0000 (0:00:02.036) 0:01:16.454 ******** 2026-01-30 04:05:18.955509 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:18.955521 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:18.955532 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:18.955542 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.955553 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:18.955564 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:18.955575 | orchestrator | 2026-01-30 04:05:18.955589 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-30 04:05:18.955606 | orchestrator | Friday 30 January 2026 04:05:10 +0000 (0:00:02.007) 0:01:18.461 ******** 2026-01-30 04:05:18.955626 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:18.955645 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:18.955663 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:18.955681 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.955701 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:18.955735 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:18.955754 | orchestrator | 2026-01-30 04:05:18.955774 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-30 04:05:18.955795 | orchestrator | Friday 30 January 2026 04:05:11 +0000 (0:00:01.810) 0:01:20.271 ******** 2026-01-30 04:05:18.955814 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:18.955828 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:18.955841 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:18.955853 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.955865 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:18.955876 | orchestrator | 
skipping: [testbed-node-5] 2026-01-30 04:05:18.955887 | orchestrator | 2026-01-30 04:05:18.955898 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-01-30 04:05:18.955925 | orchestrator | Friday 30 January 2026 04:05:13 +0000 (0:00:01.679) 0:01:21.951 ******** 2026-01-30 04:05:18.955937 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:18.955948 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:18.955958 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:18.955969 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.955980 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:18.955991 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:18.956002 | orchestrator | 2026-01-30 04:05:18.956013 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-30 04:05:18.956024 | orchestrator | Friday 30 January 2026 04:05:15 +0000 (0:00:01.769) 0:01:23.721 ******** 2026-01-30 04:05:18.956035 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:18.956053 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:18.956064 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:18.956075 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.956086 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:18.956097 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:18.956107 | orchestrator | 2026-01-30 04:05:18.956118 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-30 04:05:18.956130 | orchestrator | Friday 30 January 2026 04:05:17 +0000 (0:00:01.594) 0:01:25.315 ******** 2026-01-30 04:05:18.956141 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-30 04:05:18.956152 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  
2026-01-30 04:05:18.956163 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:18.956174 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:18.956198 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-30 04:05:18.956209 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:18.956221 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-30 04:05:18.956232 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:18.956252 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-30 04:05:22.610896 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:22.611002 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-30 04:05:22.611018 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:22.611030 | orchestrator | 2026-01-30 04:05:22.611042 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-30 04:05:22.611054 | orchestrator | Friday 30 January 2026 04:05:18 +0000 (0:00:01.908) 0:01:27.224 ******** 2026-01-30 04:05:22.611073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:22.611129 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:22.611152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:22.611173 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:22.611193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:22.611212 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:22.611246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:22.611259 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:22.611290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:22.611319 | 
orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:22.611332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:22.611343 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:22.611355 | orchestrator | 2026-01-30 04:05:22.611366 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-30 04:05:22.611377 | orchestrator | Friday 30 January 2026 04:05:20 +0000 (0:00:01.911) 0:01:29.136 ******** 2026-01-30 04:05:22.611389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:22.611401 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:22.611417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:22.611431 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:22.611453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-30 04:05:45.309271 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.309390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:45.309410 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.309422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:45.309434 | orchestrator | skipping: [testbed-node-4] 2026-01-30 
04:05:45.309445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 04:05:45.309457 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.309469 | orchestrator | 2026-01-30 04:05:45.309481 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-01-30 04:05:45.309553 | orchestrator | Friday 30 January 2026 04:05:22 +0000 (0:00:01.747) 0:01:30.883 ******** 2026-01-30 04:05:45.309576 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.309594 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.309606 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.309618 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.309648 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.309659 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.309670 | orchestrator | 2026-01-30 04:05:45.309681 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-01-30 04:05:45.309693 | orchestrator | Friday 30 January 2026 04:05:24 +0000 (0:00:01.864) 0:01:32.748 ******** 2026-01-30 04:05:45.309704 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.309715 | orchestrator | skipping: [testbed-node-1] 
2026-01-30 04:05:45.309725 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.309736 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:05:45.309747 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:05:45.309761 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:05:45.309797 | orchestrator | 2026-01-30 04:05:45.309811 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-01-30 04:05:45.309824 | orchestrator | Friday 30 January 2026 04:05:27 +0000 (0:00:03.296) 0:01:36.044 ******** 2026-01-30 04:05:45.309836 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.309849 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.309860 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.309870 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.309881 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.309892 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.309903 | orchestrator | 2026-01-30 04:05:45.309914 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-01-30 04:05:45.309925 | orchestrator | Friday 30 January 2026 04:05:29 +0000 (0:00:02.073) 0:01:38.118 ******** 2026-01-30 04:05:45.309936 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.309946 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.309957 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.309968 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.309979 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.309990 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.310001 | orchestrator | 2026-01-30 04:05:45.310013 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-01-30 04:05:45.310107 | orchestrator | Friday 30 January 2026 04:05:31 +0000 (0:00:02.038) 0:01:40.156 ******** 
2026-01-30 04:05:45.310120 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.310131 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.310142 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.310153 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.310163 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.310175 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.310186 | orchestrator | 2026-01-30 04:05:45.310197 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-01-30 04:05:45.310208 | orchestrator | Friday 30 January 2026 04:05:33 +0000 (0:00:02.064) 0:01:42.221 ******** 2026-01-30 04:05:45.310219 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.310230 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.310241 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.310252 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.310262 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.310273 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.310284 | orchestrator | 2026-01-30 04:05:45.310295 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-01-30 04:05:45.310306 | orchestrator | Friday 30 January 2026 04:05:36 +0000 (0:00:02.155) 0:01:44.377 ******** 2026-01-30 04:05:45.310317 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.310329 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.310339 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.310350 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.310361 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.310372 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.310383 | orchestrator | 2026-01-30 04:05:45.310394 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-01-30 04:05:45.310405 | orchestrator | Friday 30 January 2026 04:05:37 +0000 (0:00:01.719) 0:01:46.097 ******** 2026-01-30 04:05:45.310416 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.310427 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.310438 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.310449 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.310459 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.310470 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.310481 | orchestrator | 2026-01-30 04:05:45.310518 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-01-30 04:05:45.310540 | orchestrator | Friday 30 January 2026 04:05:39 +0000 (0:00:01.674) 0:01:47.771 ******** 2026-01-30 04:05:45.310551 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.310562 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:05:45.310573 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:05:45.310583 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:05:45.310594 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:05:45.310605 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:05:45.310616 | orchestrator | 2026-01-30 04:05:45.310627 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-01-30 04:05:45.310638 | orchestrator | Friday 30 January 2026 04:05:41 +0000 (0:00:02.028) 0:01:49.800 ******** 2026-01-30 04:05:45.310649 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-30 04:05:45.310661 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:05:45.310673 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-30 04:05:45.310684 | orchestrator | skipping: [testbed-node-2] 
2026-01-30 04:05:45.310695 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-30 04:05:45.310707 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:05:45.310718 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-30 04:05:45.310729 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:05:45.310740 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-30 04:05:45.310751 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:05:45.310768 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-01-30 04:05:45.310780 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:05:45.310791 | orchestrator |
2026-01-30 04:05:45.310802 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-01-30 04:05:45.310813 | orchestrator | Friday 30 January 2026 04:05:43 +0000 (0:00:01.840) 0:01:51.641 ********
2026-01-30 04:05:45.310825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:05:45.310838 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:05:45.310863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:05:47.746479 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:05:47.746665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:05:47.746685 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:05:47.746700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:05:47.746737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:05:47.746758 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:05:47.746777 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:05:47.746794 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:05:47.746813 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:05:47.746833 | orchestrator |
2026-01-30 04:05:47.746853 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-01-30 04:05:47.746875 | orchestrator | Friday 30 January 2026 04:05:45 +0000 (0:00:01.943) 0:01:53.584 ********
2026-01-30 04:05:47.747022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:05:47.747074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:05:47.747098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-01-30 04:05:47.747112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:05:47.747125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:05:47.747161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 04:08:06.941104 | orchestrator |
2026-01-30 04:08:06.941185 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-01-30 04:08:06.941193 | orchestrator | Friday 30 January 2026 04:05:47 +0000 (0:00:02.440) 0:01:56.025 ********
2026-01-30 04:08:06.941197 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:08:06.941204 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:08:06.941208 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:08:06.941212 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:08:06.941217 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:08:06.941221 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:08:06.941225 | orchestrator |
2026-01-30 04:08:06.941229 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-01-30 04:08:06.941233 | orchestrator | Friday 30 January 2026 04:05:48 +0000 (0:00:00.485) 0:01:56.510 ********
2026-01-30 04:08:06.941238 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:08:06.941242 | orchestrator |
2026-01-30 04:08:06.941246 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-01-30 04:08:06.941250 | orchestrator | Friday 30 January 2026 04:05:50 +0000 (0:00:02.460) 0:01:58.971 ********
2026-01-30 04:08:06.941254 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:08:06.941258 | orchestrator |
2026-01-30 04:08:06.941262 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-01-30 04:08:06.941266 | orchestrator | Friday 30 January 2026 04:05:53 +0000 (0:00:02.398) 0:02:01.370 ********
2026-01-30 04:08:06.941270 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:08:06.941274 | orchestrator |
2026-01-30 04:08:06.941279 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-30 04:08:06.941283 | orchestrator | Friday 30 January 2026 04:06:35 +0000 (0:00:41.937) 0:02:43.307 ********
2026-01-30 04:08:06.941287 | orchestrator |
2026-01-30 04:08:06.941291 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-30 04:08:06.941295 | orchestrator | Friday 30 January 2026 04:06:35 +0000 (0:00:00.068) 0:02:43.376 ********
2026-01-30 04:08:06.941299 | orchestrator |
2026-01-30 04:08:06.941303 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-30 04:08:06.941307 | orchestrator | Friday 30 January 2026 04:06:35 +0000 (0:00:00.068) 0:02:43.444 ********
2026-01-30 04:08:06.941310 | orchestrator |
2026-01-30 04:08:06.941314 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-30 04:08:06.941329 | orchestrator | Friday 30 January 2026 04:06:35 +0000 (0:00:00.067) 0:02:43.511 ********
2026-01-30 04:08:06.941333 | orchestrator |
2026-01-30 04:08:06.941337 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-30 04:08:06.941341 | orchestrator | Friday 30 January 2026 04:06:35 +0000 (0:00:00.074) 0:02:43.586 ********
2026-01-30 04:08:06.941345 | orchestrator |
2026-01-30 04:08:06.941349 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-01-30 04:08:06.941353 | orchestrator | Friday 30 January 2026 04:06:35 +0000 (0:00:00.068) 0:02:43.654 ********
2026-01-30 04:08:06.941357 | orchestrator |
2026-01-30 04:08:06.941375 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-01-30 04:08:06.941380 | orchestrator | Friday 30 January 2026 04:06:35 +0000 (0:00:00.069) 0:02:43.724 ********
2026-01-30 04:08:06.941383 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:08:06.941388 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:08:06.941392 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:08:06.941396 | orchestrator |
2026-01-30 04:08:06.941400 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-01-30 04:08:06.941404 | orchestrator | Friday 30 January 2026 04:07:04 +0000 (0:00:29.075) 0:03:12.799 ********
2026-01-30 04:08:06.941407 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:08:06.941411 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:08:06.941415 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:08:06.941419 | orchestrator |
2026-01-30 04:08:06.941423 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:08:06.941429 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-30 04:08:06.941435 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-30 04:08:06.941439 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-01-30 04:08:06.941443 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-30 04:08:06.941447 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-30 04:08:06.941451 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-30 04:08:06.941455 | orchestrator |
2026-01-30 04:08:06.941459 | orchestrator |
2026-01-30 04:08:06.941463 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:08:06.941466 | orchestrator | Friday 30 January 2026 04:08:06 +0000 (0:01:02.027) 0:04:14.826 ********
2026-01-30 04:08:06.941470 | orchestrator | ===============================================================================
2026-01-30 04:08:06.941474 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.03s
2026-01-30 04:08:06.941478 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.94s
2026-01-30 04:08:06.941482 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.08s
2026-01-30 04:08:06.941496 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.09s
2026-01-30 04:08:06.941501 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.71s
2026-01-30 04:08:06.941505 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.25s
2026-01-30 04:08:06.941509 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.24s
2026-01-30 04:08:06.941513 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.02s
2026-01-30 04:08:06.941517 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.38s
2026-01-30 04:08:06.941521 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.30s
2026-01-30 04:08:06.941525 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.30s
2026-01-30 04:08:06.941529 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.28s
2026-01-30 04:08:06.941532 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.22s
2026-01-30 04:08:06.941536 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.87s
2026-01-30 04:08:06.941540 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.58s
2026-01-30 04:08:06.941548 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.49s
2026-01-30 04:08:06.941552 | orchestrator | neutron : Creating Neutron database ------------------------------------- 2.46s
2026-01-30 04:08:06.941556 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.44s
2026-01-30 04:08:06.941560 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.44s
2026-01-30 04:08:06.941564 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.40s
2026-01-30 04:08:09.687553 | orchestrator | 2026-01-30 04:08:09 | INFO  | Task 6862e25f-d66e-49de-a861-c43c61d6f319 (nova) was prepared for execution.
2026-01-30 04:08:09.687765 | orchestrator | 2026-01-30 04:08:09 | INFO  | It takes a moment until task 6862e25f-d66e-49de-a861-c43c61d6f319 (nova) has been started and output is visible here.
2026-01-30 04:10:10.552099 | orchestrator |
2026-01-30 04:10:10.552176 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:10:10.552191 | orchestrator |
2026-01-30 04:10:10.552202 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-30 04:10:10.552212 | orchestrator | Friday 30 January 2026 04:08:13 +0000 (0:00:00.203) 0:00:00.203 ********
2026-01-30 04:10:10.552222 | orchestrator | changed: [testbed-manager]
2026-01-30 04:10:10.552233 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.552242 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:10:10.552252 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:10:10.552263 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:10:10.552273 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:10:10.552283 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:10:10.552293 | orchestrator |
2026-01-30 04:10:10.552304 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:10:10.552314 | orchestrator | Friday 30 January 2026 04:08:13 +0000 (0:00:00.593) 0:00:00.797 ********
2026-01-30 04:10:10.552324 | orchestrator | changed: [testbed-manager]
2026-01-30 04:10:10.552334 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.552344 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:10:10.552354 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:10:10.552364 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:10:10.552375 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:10:10.552382 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:10:10.552388 | orchestrator |
2026-01-30 04:10:10.552394 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:10:10.552400 | orchestrator | Friday 30 January 2026 04:08:14 +0000 (0:00:00.669) 0:00:01.466 ********
2026-01-30 04:10:10.552406 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-01-30 04:10:10.552412 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-30 04:10:10.552418 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-30 04:10:10.552424 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-30 04:10:10.552429 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-30 04:10:10.552435 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-30 04:10:10.552441 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-30 04:10:10.552447 | orchestrator |
2026-01-30 04:10:10.552452 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-30 04:10:10.552458 | orchestrator |
2026-01-30 04:10:10.552464 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-30 04:10:10.552470 | orchestrator | Friday 30 January 2026 04:08:15 +0000 (0:00:00.616) 0:00:02.083 ********
2026-01-30 04:10:10.552476 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:10:10.552482 | orchestrator |
2026-01-30 04:10:10.552487 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-30 04:10:10.552509 | orchestrator | Friday 30 January 2026 04:08:15 +0000 (0:00:00.563) 0:00:02.646 ********
2026-01-30 04:10:10.552515 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-30 04:10:10.552521 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-30 04:10:10.552527 | orchestrator |
2026-01-30 04:10:10.552533 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-30 04:10:10.552539 | orchestrator | Friday 30 January 2026 04:08:19 +0000 (0:00:04.193) 0:00:06.840 ********
2026-01-30 04:10:10.552545 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-30 04:10:10.552550 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-30 04:10:10.552556 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.552562 | orchestrator |
2026-01-30 04:10:10.552568 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-30 04:10:10.552574 | orchestrator | Friday 30 January 2026 04:08:24 +0000 (0:00:04.395) 0:00:11.236 ********
2026-01-30 04:10:10.552580 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.552586 | orchestrator |
2026-01-30 04:10:10.552591 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-30 04:10:10.552597 | orchestrator | Friday 30 January 2026 04:08:24 +0000 (0:00:00.638) 0:00:11.874 ********
2026-01-30 04:10:10.552603 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.552609 | orchestrator |
2026-01-30 04:10:10.552615 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-30 04:10:10.552620 | orchestrator | Friday 30 January 2026 04:08:26 +0000 (0:00:01.241) 0:00:13.115 ********
2026-01-30 04:10:10.552626 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.552632 | orchestrator |
2026-01-30 04:10:10.552638 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-30 04:10:10.552644 | orchestrator | Friday 30 January 2026 04:08:28 +0000 (0:00:02.576) 0:00:15.691 ********
2026-01-30 04:10:10.552649 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:10:10.552655 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.552661 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.552667 | orchestrator |
2026-01-30 04:10:10.552673 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-30 04:10:10.552700 | orchestrator | Friday 30 January 2026 04:08:29 +0000 (0:00:00.266) 0:00:15.957 ********
2026-01-30 04:10:10.552707 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:10:10.552721 | orchestrator |
2026-01-30 04:10:10.552733 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-30 04:10:10.552740 | orchestrator | Friday 30 January 2026 04:09:02 +0000 (0:00:33.474) 0:00:49.431 ********
2026-01-30 04:10:10.552746 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.552753 | orchestrator |
2026-01-30 04:10:10.552759 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-30 04:10:10.552766 | orchestrator | Friday 30 January 2026 04:09:18 +0000 (0:00:15.851) 0:01:05.282 ********
2026-01-30 04:10:10.552772 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:10:10.552779 | orchestrator |
2026-01-30 04:10:10.552785 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-30 04:10:10.552801 | orchestrator | Friday 30 January 2026 04:09:31 +0000 (0:00:12.902) 0:01:18.185 ********
2026-01-30 04:10:10.552819 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:10:10.552826 | orchestrator |
2026-01-30 04:10:10.552833 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-30 04:10:10.552839 | orchestrator | Friday 30 January 2026 04:09:31 +0000 (0:00:00.642) 0:01:18.827 ********
2026-01-30 04:10:10.552846 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:10:10.552853 | orchestrator |
2026-01-30 04:10:10.552859 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-30 04:10:10.552866 | orchestrator | Friday 30 January 2026 04:09:32 +0000 (0:00:00.431) 0:01:19.258 ********
2026-01-30 04:10:10.552873 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:10:10.552884 | orchestrator |
2026-01-30 04:10:10.552891 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-30 04:10:10.552897 | orchestrator | Friday 30 January 2026 04:09:33 +0000 (0:00:00.656) 0:01:19.915 ********
2026-01-30 04:10:10.552904 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:10:10.552911 | orchestrator |
2026-01-30 04:10:10.552917 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-30 04:10:10.552923 | orchestrator | Friday 30 January 2026 04:09:51 +0000 (0:00:18.891) 0:01:38.807 ********
2026-01-30 04:10:10.552930 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:10:10.552936 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.552943 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.552949 | orchestrator |
2026-01-30 04:10:10.552956 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-30 04:10:10.552963 | orchestrator |
2026-01-30 04:10:10.552970 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-30 04:10:10.552977 | orchestrator | Friday 30 January 2026 04:09:52 +0000 (0:00:00.297) 0:01:39.104 ********
2026-01-30 04:10:10.552983 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:10:10.552990 | orchestrator |
2026-01-30 04:10:10.552997 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-30 04:10:10.553003 | orchestrator | Friday 30 January 2026 04:09:52 +0000 (0:00:00.678) 0:01:39.782 ********
2026-01-30 04:10:10.553010 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553017 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553023 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.553030 | orchestrator |
2026-01-30 04:10:10.553037 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-30 04:10:10.553043 | orchestrator | Friday 30 January 2026 04:09:54 +0000 (0:00:02.096) 0:01:41.879 ********
2026-01-30 04:10:10.553050 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553056 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553061 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.553067 | orchestrator |
2026-01-30 04:10:10.553073 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-30 04:10:10.553079 | orchestrator | Friday 30 January 2026 04:09:57 +0000 (0:00:02.146) 0:01:44.025 ********
2026-01-30 04:10:10.553085 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:10:10.553091 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553097 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553103 | orchestrator |
2026-01-30 04:10:10.553108 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-30 04:10:10.553114 | orchestrator | Friday 30 January 2026 04:09:57 +0000 (0:00:00.492) 0:01:44.518 ********
2026-01-30 04:10:10.553120 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-30 04:10:10.553126 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553132 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-30 04:10:10.553138 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553144 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-30 04:10:10.553150 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-30 04:10:10.553156 | orchestrator |
2026-01-30 04:10:10.553161 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-30 04:10:10.553167 | orchestrator | Friday 30 January 2026 04:10:05 +0000 (0:00:08.001) 0:01:52.520 ********
2026-01-30 04:10:10.553173 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:10:10.553179 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553185 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553191 | orchestrator |
2026-01-30 04:10:10.553197 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-30 04:10:10.553203 | orchestrator | Friday 30 January 2026 04:10:05 +0000 (0:00:00.330) 0:01:52.850 ********
2026-01-30 04:10:10.553209 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-30 04:10:10.553218 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:10:10.553224 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-30 04:10:10.553230 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553236 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-30 04:10:10.553242 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553248 | orchestrator |
2026-01-30 04:10:10.553254 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-30 04:10:10.553259 | orchestrator | Friday 30 January 2026 04:10:06 +0000 (0:00:00.982) 0:01:53.833 ********
2026-01-30 04:10:10.553265 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553271 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553277 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.553283 | orchestrator |
2026-01-30 04:10:10.553289 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-30 04:10:10.553295 | orchestrator | Friday 30 January 2026 04:10:07 +0000 (0:00:00.433) 0:01:54.266 ********
2026-01-30 04:10:10.553301 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553307 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553313 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:10:10.553318 | orchestrator |
2026-01-30 04:10:10.553324 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-30 04:10:10.553330 | orchestrator | Friday 30 January 2026 04:10:08 +0000 (0:00:00.950) 0:01:55.216 ********
2026-01-30 04:10:10.553336 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:10:10.553342 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:10:10.553352 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:11:31.362650 | orchestrator |
2026-01-30 04:11:31.362848 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-30 04:11:31.362873 | orchestrator | Friday 30 January 2026 04:10:10 +0000 (0:00:02.229) 0:01:57.446 ********
2026-01-30 04:11:31.362886 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:11:31.362900 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:11:31.362911 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:11:31.362923 | orchestrator |
2026-01-30 04:11:31.362941 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-30 04:11:31.362960 | orchestrator | Friday 30 January 2026 04:10:32 +0000 (0:00:21.633) 0:02:19.079 ********
2026-01-30 04:11:31.362979 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:11:31.362997 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:11:31.363016 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:11:31.363035 | orchestrator |
2026-01-30 04:11:31.363048 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-30 04:11:31.363059 | orchestrator | Friday 30 January 2026 04:10:45 +0000 (0:00:12.954) 0:02:32.033 ********
2026-01-30 04:11:31.363070 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:11:31.363082 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:11:31.363093 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:11:31.363104 | orchestrator | 2026-01-30 04:11:31.363115 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-01-30 04:11:31.363126 | orchestrator | Friday 30 January 2026 04:10:46 +0000 (0:00:00.881) 0:02:32.915 ******** 2026-01-30 04:11:31.363137 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:11:31.363149 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:11:31.363160 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:11:31.363171 | orchestrator | 2026-01-30 04:11:31.363184 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-01-30 04:11:31.363197 | orchestrator | Friday 30 January 2026 04:10:58 +0000 (0:00:12.699) 0:02:45.614 ******** 2026-01-30 04:11:31.363214 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:11:31.363233 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:11:31.363254 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:11:31.363272 | orchestrator | 2026-01-30 04:11:31.363292 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-01-30 04:11:31.363346 | orchestrator | Friday 30 January 2026 04:10:59 +0000 (0:00:00.954) 0:02:46.568 ******** 2026-01-30 04:11:31.363390 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:11:31.363411 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:11:31.363429 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:11:31.363441 | orchestrator | 2026-01-30 04:11:31.363452 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-01-30 04:11:31.363463 | orchestrator | 2026-01-30 04:11:31.363474 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-30 04:11:31.363485 | orchestrator | Friday 30 January 2026 04:10:59 +0000 (0:00:00.290) 0:02:46.859 ******** 2026-01-30 04:11:31.363496 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:11:31.363508 | orchestrator | 2026-01-30 04:11:31.363519 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-01-30 04:11:31.363530 | orchestrator | Friday 30 January 2026 04:11:00 +0000 (0:00:00.676) 0:02:47.536 ******** 2026-01-30 04:11:31.363541 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-01-30 04:11:31.363552 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-01-30 04:11:31.363563 | orchestrator | 2026-01-30 04:11:31.363574 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-01-30 04:11:31.363585 | orchestrator | Friday 30 January 2026 04:11:04 +0000 (0:00:03.619) 0:02:51.155 ******** 2026-01-30 04:11:31.363596 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-01-30 04:11:31.363719 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-01-30 04:11:31.363792 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-01-30 04:11:31.363805 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-01-30 04:11:31.363816 | orchestrator | 2026-01-30 04:11:31.363827 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-01-30 04:11:31.363838 | orchestrator | Friday 30 January 2026 04:11:11 +0000 (0:00:07.177) 0:02:58.333 ******** 2026-01-30 04:11:31.363849 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-30 04:11:31.363860 | orchestrator | 2026-01-30 04:11:31.363871 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-01-30 04:11:31.363882 | orchestrator | Friday 30 January 2026 04:11:15 +0000 (0:00:03.616) 0:03:01.949 ******** 2026-01-30 04:11:31.363893 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-30 04:11:31.363904 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-01-30 04:11:31.363915 | orchestrator | 2026-01-30 04:11:31.363926 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-01-30 04:11:31.363936 | orchestrator | Friday 30 January 2026 04:11:18 +0000 (0:00:03.918) 0:03:05.868 ******** 2026-01-30 04:11:31.363947 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30 04:11:31.363958 | orchestrator | 2026-01-30 04:11:31.363969 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-01-30 04:11:31.363980 | orchestrator | Friday 30 January 2026 04:11:22 +0000 (0:00:03.357) 0:03:09.225 ******** 2026-01-30 04:11:31.363990 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-01-30 04:11:31.364001 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-01-30 04:11:31.364012 | orchestrator | 2026-01-30 04:11:31.364028 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-01-30 04:11:31.364062 | orchestrator | Friday 30 January 2026 04:11:30 +0000 (0:00:07.789) 0:03:17.015 ******** 2026-01-30 04:11:31.364080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 04:11:31.364113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 04:11:31.364127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 04:11:31.364154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-01-30 04:11:35.654264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:11:35.654358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:11:35.654372 | orchestrator | 2026-01-30 04:11:35.654383 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-30 04:11:35.654394 | orchestrator | Friday 30 January 2026 04:11:31 +0000 (0:00:01.241) 0:03:18.256 ******** 2026-01-30 04:11:35.654403 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:11:35.654413 | orchestrator | 2026-01-30 04:11:35.654422 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-30 04:11:35.654431 | orchestrator | Friday 30 January 2026 04:11:31 +0000 (0:00:00.132) 0:03:18.388 ******** 2026-01-30 04:11:35.654439 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:11:35.654448 | 
orchestrator | skipping: [testbed-node-1] 2026-01-30 04:11:35.654457 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:11:35.654466 | orchestrator | 2026-01-30 04:11:35.654474 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-30 04:11:35.654483 | orchestrator | Friday 30 January 2026 04:11:31 +0000 (0:00:00.278) 0:03:18.667 ******** 2026-01-30 04:11:35.654492 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:11:35.654501 | orchestrator | 2026-01-30 04:11:35.654509 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-30 04:11:35.654518 | orchestrator | Friday 30 January 2026 04:11:32 +0000 (0:00:00.641) 0:03:19.308 ******** 2026-01-30 04:11:35.654527 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:11:35.654536 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:11:35.654545 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:11:35.654553 | orchestrator | 2026-01-30 04:11:35.654562 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-30 04:11:35.654571 | orchestrator | Friday 30 January 2026 04:11:32 +0000 (0:00:00.447) 0:03:19.756 ******** 2026-01-30 04:11:35.654580 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:11:35.654590 | orchestrator | 2026-01-30 04:11:35.654599 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-30 04:11:35.654608 | orchestrator | Friday 30 January 2026 04:11:33 +0000 (0:00:00.532) 0:03:20.288 ******** 2026-01-30 04:11:35.654635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 04:11:35.654682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 04:11:35.654694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-30 04:11:35.654704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:11:35.654714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:11:35.654762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:11:35.654773 | orchestrator | 2026-01-30 04:11:35.654788 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-30 04:11:37.241575 | orchestrator | Friday 30 January 2026 04:11:35 +0000 (0:00:02.265) 0:03:22.554 ******** 2026-01-30 04:11:37.241688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-30 04:11:37.241711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:11:37.241840 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:11:37.241868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-30 04:11:37.241922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:11:37.241936 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:11:37.241971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-30 04:11:37.241985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:11:37.241996 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:11:37.242008 | orchestrator | 2026-01-30 04:11:37.242081 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-30 04:11:37.242094 | orchestrator | Friday 30 January 2026 04:11:36 +0000 (0:00:00.842) 0:03:23.397 
********
2026-01-30 04:11:37.242106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:37.242130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:37.242142 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:11:37.242169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:39.534537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:39.534625 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:11:39.534639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:39.534670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:39.534680 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:11:39.534689 | orchestrator |
2026-01-30 04:11:39.534698 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-01-30 04:11:39.534707 | orchestrator | Friday 30 January 2026 04:11:37 +0000 (0:00:00.744) 0:03:24.141 ********
2026-01-30 04:11:39.534728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:39.534819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:39.534830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:39.534850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:39.534860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:39.534874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:45.696092 | orchestrator |
2026-01-30 04:11:45.696182 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-01-30 04:11:45.696191 | orchestrator | Friday 30 January 2026 04:11:39 +0000 (0:00:02.291) 0:03:26.432 ********
2026-01-30 04:11:45.696199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:45.696222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:45.696239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:45.696256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:45.696263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:45.696272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:45.696277 | orchestrator |
2026-01-30 04:11:45.696282 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-01-30 04:11:45.696287 | orchestrator | Friday 30 January 2026 04:11:45 +0000 (0:00:05.583) 0:03:32.016 ********
2026-01-30 04:11:45.696294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:45.696300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:45.696305 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:11:45.696316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:49.801583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:49.801701 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:11:49.801726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:49.801826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:49.801846 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:11:49.801861 | orchestrator |
2026-01-30 04:11:49.801878 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-01-30 04:11:49.801894 | orchestrator | Friday 30 January 2026 04:11:45 +0000 (0:00:00.578) 0:03:32.595 ********
2026-01-30 04:11:49.801909 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:11:49.801925 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:11:49.801969 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:11:49.801985 | orchestrator |
2026-01-30 04:11:49.802131 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-01-30 04:11:49.802149 | orchestrator | Friday 30 January 2026 04:11:47 +0000 (0:00:01.462) 0:03:34.057 ********
2026-01-30 04:11:49.802167 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:11:49.802184 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:11:49.802202 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:11:49.802219 | orchestrator |
2026-01-30 04:11:49.802237 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-01-30 04:11:49.802254 | orchestrator | Friday 30 January 2026 04:11:47 +0000 (0:00:00.295) 0:03:34.352 ********
2026-01-30 04:11:49.802327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:49.802350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:49.802379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-30 04:11:49.802400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:49.802431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:11:49.802461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:12:33.361700 | orchestrator |
2026-01-30 04:12:33.361824 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-01-30 04:12:33.361839 | orchestrator | Friday 30 January 2026 04:11:49 +0000 (0:00:01.927) 0:03:36.280 ********
2026-01-30 04:12:33.361848 | orchestrator |
2026-01-30 04:12:33.361858 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-01-30 04:12:33.361867 | orchestrator | Friday 30 January 2026 04:11:49 +0000 (0:00:00.151) 0:03:36.432 ********
2026-01-30 04:12:33.361876 | orchestrator |
2026-01-30 04:12:33.361885 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-01-30 04:12:33.361894 | orchestrator | Friday 30 January 2026 04:11:49 +0000 (0:00:00.132) 0:03:36.564 ********
2026-01-30 04:12:33.361903 | orchestrator |
2026-01-30 04:12:33.361915 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-01-30 04:12:33.361930 | orchestrator | Friday 30 January 2026 04:11:49 +0000 (0:00:00.132) 0:03:36.697 ********
2026-01-30 04:12:33.361945 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:12:33.361961 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:12:33.361975 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:12:33.361989 | orchestrator |
2026-01-30 04:12:33.362005 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-01-30 04:12:33.362084 | orchestrator | Friday 30 January 2026 04:12:12 +0000 (0:00:22.519) 0:03:59.216 ********
2026-01-30 04:12:33.362095 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:12:33.362104 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:12:33.362113 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:12:33.362122 | orchestrator |
2026-01-30 04:12:33.362131 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-01-30 04:12:33.362140 | orchestrator |
2026-01-30 04:12:33.362148 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-30 04:12:33.362158 | orchestrator | Friday 30 January 2026 04:12:22 +0000 (0:00:10.330) 0:04:09.547 ********
2026-01-30 04:12:33.362168 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:12:33.362179 | orchestrator |
2026-01-30 04:12:33.362202 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-30 04:12:33.362212 | orchestrator | Friday 30 January 2026 04:12:23 +0000 (0:00:01.118) 0:04:10.666 ********
2026-01-30 04:12:33.362221 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:12:33.362252 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:12:33.362264 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:12:33.362274 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:33.362284 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:33.362294 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:33.362304 | orchestrator |
2026-01-30 04:12:33.362315 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-01-30 04:12:33.362325 | orchestrator | Friday 30 January 2026 04:12:24 +0000 (0:00:00.566) 0:04:11.232 ********
2026-01-30 04:12:33.362335 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:33.362344 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:33.362354 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:33.362364 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:12:33.362375 | orchestrator |
2026-01-30 04:12:33.362385 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-30 04:12:33.362395 | orchestrator | Friday 30 January 2026 04:12:25 +0000 (0:00:00.955) 0:04:12.188 ********
2026-01-30 04:12:33.362406 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-01-30 04:12:33.362417 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-01-30 04:12:33.362427 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-01-30 04:12:33.362437 | orchestrator |
2026-01-30 04:12:33.362447 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-30 04:12:33.362458 | orchestrator | Friday 30 January 2026 04:12:25 +0000 (0:00:00.635) 0:04:12.823 ********
2026-01-30 04:12:33.362466 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-01-30 04:12:33.362475 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-01-30 04:12:33.362484 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-01-30 04:12:33.362492 | orchestrator |
2026-01-30 04:12:33.362501 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-30 04:12:33.362510 | orchestrator | Friday 30 January 2026 04:12:27 +0000 (0:00:01.461) 0:04:14.285 ********
2026-01-30 04:12:33.362519 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-01-30 04:12:33.362527 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:12:33.362536 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-01-30 04:12:33.362545 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:12:33.362554 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-01-30 04:12:33.362563 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:12:33.362572 | orchestrator |
2026-01-30 04:12:33.362581 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-01-30 04:12:33.362590 | orchestrator | Friday 30 January 2026 04:12:27 +0000 (0:00:00.538) 0:04:14.823 ********
2026-01-30 04:12:33.362599 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-30 04:12:33.362608 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-30 04:12:33.362617 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-30 04:12:33.362626 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-30 04:12:33.362634 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-30 04:12:33.362643 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:33.362652 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-30 04:12:33.362678 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-30 04:12:33.362687 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:33.362696 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-30 04:12:33.362704 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-30 04:12:33.362720 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:33.362729 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-30 04:12:33.362738 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-30 04:12:33.362747 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-30 04:12:33.362755 | orchestrator |
2026-01-30 04:12:33.362838 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-01-30 04:12:33.362849 | orchestrator | Friday 30 January 2026 04:12:28 +0000 (0:00:01.161) 0:04:15.832 ********
2026-01-30 04:12:33.362858 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:33.362866 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:33.362875 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:33.362884 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:12:33.362893 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:12:33.362901 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:12:33.362913 | orchestrator |
2026-01-30 04:12:33.362928 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-01-30 04:12:33.362943 | orchestrator | Friday 30 January 2026 04:12:30 +0000 (0:00:01.161) 0:04:16.994 ********
2026-01-30 04:12:33.362957 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:33.362971 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:33.362985 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:33.363001 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:12:33.363015 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:12:33.363031 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:12:33.363040 | orchestrator |
2026-01-30 04:12:33.363049 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-30 04:12:33.363058 | orchestrator | Friday 30 January 2026 04:12:31 +0000 (0:00:01.444) 0:04:18.439 ********
2026-01-30 04:12:33.363075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:12:33.363091 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host',
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:12:33.363110 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:34.850576 | orchestrator | 2026-01-30 04:12:34.850593 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-30 04:12:34.850609 | 
orchestrator | Friday 30 January 2026 04:12:33 +0000 (0:00:02.114) 0:04:20.553 ******** 2026-01-30 04:12:34.850625 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:12:34.850641 | orchestrator | 2026-01-30 04:12:34.850655 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-30 04:12:34.850680 | orchestrator | Friday 30 January 2026 04:12:34 +0000 (0:00:01.194) 0:04:21.747 ******** 2026-01-30 04:12:38.088056 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088179 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2026-01-30 04:12:38.088244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088310 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:38.088432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:39.593027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:39.593136 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:12:39.593153 | orchestrator | 2026-01-30 04:12:39.593167 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-30 04:12:39.593181 | orchestrator | Friday 30 January 2026 04:12:38 +0000 (0:00:03.502) 0:04:25.250 ******** 2026-01-30 04:12:39.593215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-30 04:12:39.593229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:12:39.593243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-30 04:12:39.593255 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:12:39.593292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-30 04:12:39.593305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:12:39.593317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:12:39.593336 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:12:39.593348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:12:39.593360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-30 04:12:39.593380 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:12:40.795971 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:12:40.796075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:12:40.796116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:12:40.796143 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:40.796151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:12:40.796158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:12:40.796164 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:40.796171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:12:40.796178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:12:40.796185 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:40.796191 | orchestrator |
2026-01-30 04:12:40.796200 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-01-30 04:12:40.796208 | orchestrator | Friday 30 January 2026 04:12:39 +0000 (0:00:01.333) 0:04:26.584 ********
2026-01-30 04:12:40.796233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:12:40.796246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-30 04:12:40.796255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:12:40.796262 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:12:40.796268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:12:40.796275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-30 04:12:40.796289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:12:47.926592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:12:47.926676 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:12:47.926688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-30 04:12:47.926696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:12:47.926702 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:12:47.926710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:12:47.926717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:12:47.926723 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:47.926763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:12:47.926822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:12:47.926830 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:47.926836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:12:47.926842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:12:47.926848 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:47.926854 | orchestrator |
2026-01-30 04:12:47.926861 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-30 04:12:47.926868 | orchestrator | Friday 30 January 2026 04:12:41 +0000 (0:00:01.733) 0:04:28.317 ********
2026-01-30 04:12:47.926874 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:12:47.926880 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:12:47.926886 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:12:47.926893 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:12:47.926899 | orchestrator |
2026-01-30 04:12:47.926904 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-01-30 04:12:47.926910 | orchestrator | Friday 30 January 2026 04:12:42 +0000 (0:00:01.017) 0:04:29.335 ********
2026-01-30 04:12:47.926916 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-30 04:12:47.926922 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-30 04:12:47.926928 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-30 04:12:47.926934 | orchestrator |
2026-01-30 04:12:47.926940 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-01-30 04:12:47.926946 | orchestrator | Friday 30 January 2026 04:12:43 +0000 (0:00:00.897) 0:04:30.232 ********
2026-01-30 04:12:47.926952 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-30 04:12:47.926957 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-30 04:12:47.926963 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-30 04:12:47.926969 | orchestrator |
2026-01-30 04:12:47.926975 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-01-30 04:12:47.926985 | orchestrator | Friday 30 January 2026 04:12:44 +0000 (0:00:01.048) 0:04:31.280 ********
2026-01-30 04:12:47.926991 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:12:47.926998 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:12:47.927003 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:12:47.927012 | orchestrator |
2026-01-30 04:12:47.927021 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-01-30 04:12:47.927030 | orchestrator | Friday 30 January 2026 04:12:44 +0000 (0:00:00.497) 0:04:31.778 ********
2026-01-30 04:12:47.927048 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:12:47.927057 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:12:47.927067 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:12:47.927077 | orchestrator |
2026-01-30 04:12:47.927088 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-01-30 04:12:47.927098 | orchestrator | Friday 30 January 2026 04:12:45 +0000 (0:00:00.508) 0:04:32.286 ********
2026-01-30 04:12:47.927109 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-30 04:12:47.927120 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-30 04:12:47.927126 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-30 04:12:47.927132 | orchestrator |
2026-01-30 04:12:47.927143 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-01-30 04:12:47.927149 | orchestrator | Friday 30 January 2026 04:12:46 +0000 (0:00:01.130) 0:04:33.416 ********
2026-01-30 04:12:47.927156 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-30 04:12:47.927163 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-30 04:12:47.927175 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-30 04:13:05.142305 | orchestrator |
2026-01-30 04:13:05.142404 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-01-30 04:13:05.142415 | orchestrator | Friday 30 January 2026 04:12:47 +0000 (0:00:01.408) 0:04:34.825 ********
2026-01-30 04:13:05.142422 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-01-30 04:13:05.142430 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-01-30 04:13:05.142437 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-01-30 04:13:05.142443 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-01-30 04:13:05.142449 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-01-30 04:13:05.142455 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-01-30 04:13:05.142462 | orchestrator |
2026-01-30 04:13:05.142467 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-01-30 04:13:05.142474 | orchestrator | Friday 30 January 2026 04:12:51 +0000 (0:00:03.585) 0:04:38.410 ********
2026-01-30 04:13:05.142480 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:13:05.142488 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:13:05.142494 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:13:05.142500 | orchestrator |
2026-01-30 04:13:05.142506 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-01-30 04:13:05.142511 | orchestrator | Friday 30 January 2026 04:12:51 +0000 (0:00:00.285) 0:04:38.696 ********
2026-01-30 04:13:05.142518 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:13:05.142524 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:13:05.142530 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:13:05.142537 | orchestrator |
2026-01-30 04:13:05.142543 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-01-30 04:13:05.142549 | orchestrator | Friday 30 January 2026 04:12:52 +0000 (0:00:00.285) 0:04:38.981 ********
2026-01-30 04:13:05.142556 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:13:05.142562 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:13:05.142568 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:13:05.142574 | orchestrator |
2026-01-30 04:13:05.142581 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-01-30 04:13:05.142610 | orchestrator | Friday 30 January 2026 04:12:53 +0000 (0:00:01.391) 0:04:40.373 ********
2026-01-30 04:13:05.142618 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-30 04:13:05.142626 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-30 04:13:05.142632 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-01-30 04:13:05.142639 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-30 04:13:05.142645 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-30 04:13:05.142652 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-01-30 04:13:05.142658 | orchestrator |
2026-01-30 04:13:05.142664 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-01-30 04:13:05.142670 | orchestrator | Friday 30 January 2026 04:12:56 +0000 (0:00:03.042) 0:04:43.416 ********
2026-01-30 04:13:05.142677 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-30 04:13:05.142683 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-30 04:13:05.142689 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-30 04:13:05.142695 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-30 04:13:05.142701 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:13:05.142708 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-30 04:13:05.142714 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:13:05.142720 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-30 04:13:05.142726 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:13:05.142732 | orchestrator |
2026-01-30 04:13:05.142737 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-01-30 04:13:05.142743 | orchestrator | Friday 30 January 2026 04:12:59 +0000 (0:00:03.130) 0:04:46.546 ********
2026-01-30 04:13:05.142748 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:13:05.142754 | orchestrator |
2026-01-30 04:13:05.142760 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-01-30 04:13:05.142766 | orchestrator | Friday 30 January 2026 04:12:59 +0000 (0:00:00.123) 0:04:46.670 ********
2026-01-30 04:13:05.142772 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:13:05.142860 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:13:05.142869 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:13:05.142876 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:13:05.142895 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:13:05.142910 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:13:05.142924 | orchestrator |
2026-01-30 04:13:05.142931 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-01-30 04:13:05.142937 | orchestrator | Friday 30 January 2026 04:13:00 +0000 (0:00:00.738) 0:04:47.409 ********
2026-01-30 04:13:05.142944 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-30 04:13:05.142950 | orchestrator |
2026-01-30 04:13:05.142970 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-01-30 04:13:05.142978 | orchestrator | Friday 30 January 2026 04:13:01 +0000 (0:00:00.644) 0:04:48.054 ********
2026-01-30 04:13:05.142985 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:13:05.142992 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:13:05.142999 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:13:05.143005 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:13:05.143030 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:13:05.143037 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:13:05.143043 | orchestrator |
2026-01-30 04:13:05.143049 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-01-30 04:13:05.143063 | orchestrator | Friday 30 January 2026 04:13:01 +0000 (0:00:00.563) 0:04:48.617 ********
2026-01-30 04:13:05.143073 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:13:05.143083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:13:05.143090 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-30 04:13:05.143098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:13:05.143113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:13:09.328222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-30 04:13:09.328323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-30 04:13:09.328340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-30 04:13:09.328351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-30 04:13:09.328363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:13:09.328375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:13:09.328422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:13:09.328458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:13:09.328472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:13:09.328484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-30 04:13:09.328496 | orchestrator |
2026-01-30 04:13:09.328509 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-01-30 04:13:09.328522 | orchestrator | Friday 30 January 2026 04:13:05 +0000 (0:00:03.583) 0:04:52.200 ********
2026-01-30 04:13:09.328534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-30 04:13:09.328552 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:13:09.328579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-30 04:13:11.165994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:13:11.166170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-30 04:13:11.166191 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:13:11.166205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:13:11.166389 | orchestrator | 2026-01-30 04:13:11.166402 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-30 04:13:11.166421 | orchestrator | Friday 30 January 2026 04:13:11 +0000 (0:00:05.866) 0:04:58.067 ******** 2026-01-30 04:13:30.774719 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:13:30.774850 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:13:30.774861 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:13:30.774868 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.774875 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:30.774882 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.774888 | orchestrator | 2026-01-30 04:13:30.774896 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-30 04:13:30.774904 | orchestrator | Friday 30 January 2026 04:13:12 +0000 (0:00:01.363) 0:04:59.431 ******** 2026-01-30 04:13:30.774911 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-30 04:13:30.774918 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-30 04:13:30.774925 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-30 04:13:30.774931 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-30 04:13:30.774938 | orchestrator | skipping: [testbed-node-0] 
=> (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-30 04:13:30.774945 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:30.774952 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-30 04:13:30.774958 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-30 04:13:30.774965 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-30 04:13:30.774971 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.774977 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-30 04:13:30.774984 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.774990 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-30 04:13:30.775013 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-30 04:13:30.775020 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-30 04:13:30.775027 | orchestrator | 2026-01-30 04:13:30.775034 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-30 04:13:30.775040 | orchestrator | Friday 30 January 2026 04:13:15 +0000 (0:00:03.410) 0:05:02.841 ******** 2026-01-30 04:13:30.775047 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:13:30.775053 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:13:30.775059 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:13:30.775066 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:30.775072 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.775078 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.775084 | orchestrator | 2026-01-30 04:13:30.775091 | orchestrator | 
TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-30 04:13:30.775097 | orchestrator | Friday 30 January 2026 04:13:16 +0000 (0:00:00.500) 0:05:03.341 ******** 2026-01-30 04:13:30.775103 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-30 04:13:30.775110 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-30 04:13:30.775116 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-30 04:13:30.775123 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-30 04:13:30.775129 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-30 04:13:30.775147 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-30 04:13:30.775153 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-30 04:13:30.775160 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-30 04:13:30.775166 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-30 04:13:30.775172 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-30 04:13:30.775179 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.775185 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-30 04:13:30.775191 | orchestrator 
| skipping: [testbed-node-0] 2026-01-30 04:13:30.775197 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-30 04:13:30.775204 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.775210 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-30 04:13:30.775217 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-30 04:13:30.775235 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-30 04:13:30.775241 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-30 04:13:30.775248 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-30 04:13:30.775254 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-30 04:13:30.775260 | orchestrator | 2026-01-30 04:13:30.775266 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-30 04:13:30.775278 | orchestrator | Friday 30 January 2026 04:13:21 +0000 (0:00:05.140) 0:05:08.482 ******** 2026-01-30 04:13:30.775285 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 04:13:30.775293 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 04:13:30.775300 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 04:13:30.775307 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-30 04:13:30.775313 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-30 04:13:30.775320 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-30 04:13:30.775327 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-30 04:13:30.775334 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-30 04:13:30.775342 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-30 04:13:30.775349 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 04:13:30.775355 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 04:13:30.775362 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 04:13:30.775369 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-30 04:13:30.775376 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.775383 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-30 04:13:30.775390 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:30.775397 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-30 04:13:30.775404 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.775411 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-30 04:13:30.775418 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-30 04:13:30.775425 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-30 04:13:30.775432 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-30 04:13:30.775439 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-30 04:13:30.775446 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-30 04:13:30.775453 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-30 04:13:30.775460 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-30 04:13:30.775470 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-30 04:13:30.775478 | orchestrator | 2026-01-30 04:13:30.775485 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-30 04:13:30.775492 | orchestrator | Friday 30 January 2026 04:13:27 +0000 (0:00:06.331) 0:05:14.813 ******** 2026-01-30 04:13:30.775499 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:13:30.775506 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:13:30.775513 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:13:30.775520 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:30.775527 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.775534 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.775541 | orchestrator | 2026-01-30 04:13:30.775547 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-30 04:13:30.775559 | orchestrator | Friday 30 January 2026 04:13:28 +0000 (0:00:00.602) 0:05:15.415 ******** 2026-01-30 04:13:30.775566 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:13:30.775573 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:13:30.775580 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:13:30.775587 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:30.775594 | 
orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.775601 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.775608 | orchestrator | 2026-01-30 04:13:30.775615 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-30 04:13:30.775622 | orchestrator | Friday 30 January 2026 04:13:29 +0000 (0:00:00.503) 0:05:15.919 ******** 2026-01-30 04:13:30.775629 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:30.775637 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:30.775643 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:13:30.775650 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:30.775656 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:13:30.775662 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:13:30.775668 | orchestrator | 2026-01-30 04:13:30.775678 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-30 04:13:31.686593 | orchestrator | Friday 30 January 2026 04:13:30 +0000 (0:00:01.743) 0:05:17.663 ******** 2026-01-30 04:13:31.686667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-01-30 04:13:31.686678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:13:31.686686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-30 04:13:31.686693 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:13:31.686714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-30 04:13:31.686737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:13:31.686755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-30 04:13:31.686761 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-30 04:13:31.686767 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:13:31.686772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-30 04:13:31.686781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-30 04:13:31.686828 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:13:31.686855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-30 04:13:31.686866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 04:13:34.820266 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:34.820336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-30 04:13:34.820343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 04:13:34.820347 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:34.820351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-30 04:13:34.820355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 04:13:34.820378 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:34.820383 | orchestrator | 2026-01-30 04:13:34.820388 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-30 04:13:34.820393 | orchestrator | Friday 30 January 2026 04:13:31 +0000 (0:00:01.179) 0:05:18.843 ******** 2026-01-30 04:13:34.820407 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-30 04:13:34.820411 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-30 04:13:34.820415 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:13:34.820419 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-30 04:13:34.820423 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-30 04:13:34.820427 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:13:34.820431 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-30 04:13:34.820435 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-30 04:13:34.820439 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:13:34.820443 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-30 04:13:34.820447 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-30 04:13:34.820451 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:13:34.820455 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-01-30 04:13:34.820459 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-30 04:13:34.820463 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:13:34.820467 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-30 04:13:34.820471 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-30 04:13:34.820474 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:13:34.820478 | orchestrator | 2026-01-30 04:13:34.820483 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-30 04:13:34.820487 | orchestrator | Friday 30 January 2026 04:13:32 +0000 (0:00:00.671) 0:05:19.515 ******** 2026-01-30 04:13:34.820503 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:13:34.820510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:13:34.820521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-30 04:13:34.820531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:13:34.820537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:13:34.820548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084672 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-30 04:14:23.084726 | orchestrator | 2026-01-30 04:14:23.084734 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-30 04:14:23.084741 | orchestrator | Friday 30 January 2026 04:13:35 +0000 (0:00:02.486) 0:05:22.001 ******** 2026-01-30 
04:14:23.084748 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:14:23.084755 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:14:23.084761 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:14:23.084767 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:14:23.084772 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:14:23.084778 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:14:23.084784 | orchestrator | 2026-01-30 04:14:23.084790 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-30 04:14:23.084796 | orchestrator | Friday 30 January 2026 04:13:35 +0000 (0:00:00.618) 0:05:22.619 ******** 2026-01-30 04:14:23.084802 | orchestrator | 2026-01-30 04:14:23.084808 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-30 04:14:23.084863 | orchestrator | Friday 30 January 2026 04:13:35 +0000 (0:00:00.127) 0:05:22.747 ******** 2026-01-30 04:14:23.084877 | orchestrator | 2026-01-30 04:14:23.084888 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-30 04:14:23.084898 | orchestrator | Friday 30 January 2026 04:13:35 +0000 (0:00:00.132) 0:05:22.880 ******** 2026-01-30 04:14:23.084907 | orchestrator | 2026-01-30 04:14:23.084916 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-30 04:14:23.084925 | orchestrator | Friday 30 January 2026 04:13:36 +0000 (0:00:00.135) 0:05:23.016 ******** 2026-01-30 04:14:23.084936 | orchestrator | 2026-01-30 04:14:23.084946 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-30 04:14:23.084956 | orchestrator | Friday 30 January 2026 04:13:36 +0000 (0:00:00.135) 0:05:23.151 ******** 2026-01-30 04:14:23.084966 | orchestrator | 2026-01-30 04:14:23.084975 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-01-30 04:14:23.084986 | orchestrator | Friday 30 January 2026 04:13:36 +0000 (0:00:00.268) 0:05:23.420 ******** 2026-01-30 04:14:23.084995 | orchestrator | 2026-01-30 04:14:23.085006 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-30 04:14:23.085018 | orchestrator | Friday 30 January 2026 04:13:36 +0000 (0:00:00.134) 0:05:23.554 ******** 2026-01-30 04:14:23.085030 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:14:23.085041 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:14:23.085052 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:14:23.085062 | orchestrator | 2026-01-30 04:14:23.085073 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-30 04:14:23.085083 | orchestrator | Friday 30 January 2026 04:13:48 +0000 (0:00:11.627) 0:05:35.181 ******** 2026-01-30 04:14:23.085094 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:14:23.085106 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:14:23.085118 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:14:23.085138 | orchestrator | 2026-01-30 04:14:23.085149 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-30 04:14:23.085160 | orchestrator | Friday 30 January 2026 04:14:02 +0000 (0:00:14.616) 0:05:49.798 ******** 2026-01-30 04:14:23.085172 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:14:23.085183 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:14:23.085195 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:14:23.085206 | orchestrator | 2026-01-30 04:14:23.085226 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-30 04:16:41.331491 | orchestrator | Friday 30 January 2026 04:14:23 +0000 (0:00:20.179) 0:06:09.977 ******** 2026-01-30 04:16:41.331606 | orchestrator | changed: 
[testbed-node-5] 2026-01-30 04:16:41.331623 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:16:41.331635 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:16:41.331646 | orchestrator | 2026-01-30 04:16:41.331659 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-30 04:16:41.331671 | orchestrator | Friday 30 January 2026 04:14:59 +0000 (0:00:36.290) 0:06:46.268 ******** 2026-01-30 04:16:41.331682 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:16:41.331693 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-01-30 04:16:41.331705 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:16:41.331716 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:16:41.331726 | orchestrator | 2026-01-30 04:16:41.331737 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-30 04:16:41.331748 | orchestrator | Friday 30 January 2026 04:15:05 +0000 (0:00:06.163) 0:06:52.432 ******** 2026-01-30 04:16:41.331759 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:16:41.331770 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:16:41.331781 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:16:41.331792 | orchestrator | 2026-01-30 04:16:41.331803 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-30 04:16:41.331814 | orchestrator | Friday 30 January 2026 04:15:06 +0000 (0:00:00.747) 0:06:53.179 ******** 2026-01-30 04:16:41.331825 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:16:41.331836 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:16:41.331847 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:16:41.331858 | orchestrator | 2026-01-30 04:16:41.331869 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-30 04:16:41.331881 | 
orchestrator | Friday 30 January 2026 04:15:33 +0000 (0:00:27.119) 0:07:20.298 ******** 2026-01-30 04:16:41.331945 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:16:41.331957 | orchestrator | 2026-01-30 04:16:41.331968 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-30 04:16:41.331979 | orchestrator | Friday 30 January 2026 04:15:33 +0000 (0:00:00.280) 0:07:20.579 ******** 2026-01-30 04:16:41.331990 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:16:41.332001 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:16:41.332012 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:41.332026 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:41.332039 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:41.332051 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-30 04:16:41.332066 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-30 04:16:41.332079 | orchestrator | 2026-01-30 04:16:41.332092 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-30 04:16:41.332104 | orchestrator | Friday 30 January 2026 04:15:55 +0000 (0:00:22.212) 0:07:42.791 ******** 2026-01-30 04:16:41.332116 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:41.332128 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:16:41.332140 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:16:41.332153 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:41.332188 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:16:41.332201 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:41.332213 | orchestrator | 2026-01-30 04:16:41.332226 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-30 04:16:41.332239 | orchestrator | 
Friday 30 January 2026 04:16:03 +0000 (0:00:07.946) 0:07:50.738 ******** 2026-01-30 04:16:41.332251 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:16:41.332265 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:16:41.332290 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:41.332304 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:41.332317 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:41.332329 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-01-30 04:16:41.332342 | orchestrator | 2026-01-30 04:16:41.332355 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-30 04:16:41.332368 | orchestrator | Friday 30 January 2026 04:16:07 +0000 (0:00:03.341) 0:07:54.079 ******** 2026-01-30 04:16:41.332380 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-30 04:16:41.332391 | orchestrator | 2026-01-30 04:16:41.332402 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-30 04:16:41.332413 | orchestrator | Friday 30 January 2026 04:16:20 +0000 (0:00:13.745) 0:08:07.825 ******** 2026-01-30 04:16:41.332424 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-30 04:16:41.332435 | orchestrator | 2026-01-30 04:16:41.332446 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-30 04:16:41.332457 | orchestrator | Friday 30 January 2026 04:16:22 +0000 (0:00:01.502) 0:08:09.327 ******** 2026-01-30 04:16:41.332558 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:16:41.332571 | orchestrator | 2026-01-30 04:16:41.332584 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-30 04:16:41.332602 | orchestrator | Friday 30 January 2026 04:16:23 +0000 (0:00:01.447) 0:08:10.775 ******** 2026-01-30 04:16:41.332617 | 
orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-30 04:16:41.332628 | orchestrator | 2026-01-30 04:16:41.332639 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-01-30 04:16:41.332650 | orchestrator | Friday 30 January 2026 04:16:36 +0000 (0:00:12.357) 0:08:23.133 ******** 2026-01-30 04:16:41.332661 | orchestrator | ok: [testbed-node-3] 2026-01-30 04:16:41.332672 | orchestrator | ok: [testbed-node-4] 2026-01-30 04:16:41.332683 | orchestrator | ok: [testbed-node-5] 2026-01-30 04:16:41.332694 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:41.332705 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:41.332715 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:41.332726 | orchestrator | 2026-01-30 04:16:41.332737 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-30 04:16:41.332748 | orchestrator | 2026-01-30 04:16:41.332779 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-30 04:16:41.332791 | orchestrator | Friday 30 January 2026 04:16:37 +0000 (0:00:01.708) 0:08:24.841 ******** 2026-01-30 04:16:41.332802 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:16:41.332813 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:16:41.332823 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:16:41.332834 | orchestrator | 2026-01-30 04:16:41.332845 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-30 04:16:41.332856 | orchestrator | 2026-01-30 04:16:41.332867 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-30 04:16:41.332878 | orchestrator | Friday 30 January 2026 04:16:38 +0000 (0:00:00.934) 0:08:25.775 ******** 2026-01-30 04:16:41.332928 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:41.332940 | orchestrator | skipping: 
[testbed-node-1] 2026-01-30 04:16:41.332951 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:41.332962 | orchestrator | 2026-01-30 04:16:41.332973 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-30 04:16:41.332994 | orchestrator | 2026-01-30 04:16:41.333006 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-30 04:16:41.333017 | orchestrator | Friday 30 January 2026 04:16:39 +0000 (0:00:00.645) 0:08:26.421 ******** 2026-01-30 04:16:41.333028 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-30 04:16:41.333039 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-30 04:16:41.333050 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-30 04:16:41.333061 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-30 04:16:41.333072 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-30 04:16:41.333083 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-01-30 04:16:41.333094 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:16:41.333105 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-30 04:16:41.333116 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-30 04:16:41.333127 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-30 04:16:41.333138 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-30 04:16:41.333148 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-30 04:16:41.333159 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-30 04:16:41.333170 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:16:41.333181 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-30 04:16:41.333192 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-30 04:16:41.333203 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-30 04:16:41.333214 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-30 04:16:41.333225 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-30 04:16:41.333236 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-30 04:16:41.333247 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:16:41.333258 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-30 04:16:41.333269 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-30 04:16:41.333279 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-30 04:16:41.333290 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-30 04:16:41.333301 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-30 04:16:41.333312 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-01-30 04:16:41.333331 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:41.333342 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-30 04:16:41.333353 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-30 04:16:41.333364 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-30 04:16:41.333374 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-30 04:16:41.333385 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-30 04:16:41.333396 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-30 04:16:41.333407 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:41.333418 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-30 04:16:41.333429 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-30 04:16:41.333439 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-30 04:16:41.333450 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-30 04:16:41.333461 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-30 04:16:41.333472 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-30 04:16:41.333482 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:41.333500 | orchestrator | 2026-01-30 04:16:41.333511 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-30 04:16:41.333522 | orchestrator | 2026-01-30 04:16:41.333533 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-30 04:16:41.333544 | orchestrator | Friday 30 January 2026 04:16:40 +0000 (0:00:01.278) 0:08:27.699 ******** 2026-01-30 04:16:41.333554 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-30 04:16:41.333565 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-30 04:16:41.333576 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:41.333587 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-30 04:16:41.333598 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-30 04:16:41.333608 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:41.333619 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-30 04:16:41.333637 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-30 04:16:42.946343 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:42.946444 | orchestrator | 2026-01-30 04:16:42.946459 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-30 04:16:42.946472 | orchestrator | 2026-01-30 
04:16:42.946482 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-30 04:16:42.946493 | orchestrator | Friday 30 January 2026 04:16:41 +0000 (0:00:00.529) 0:08:28.229 ******** 2026-01-30 04:16:42.946504 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:42.946515 | orchestrator | 2026-01-30 04:16:42.946525 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-30 04:16:42.946536 | orchestrator | 2026-01-30 04:16:42.946546 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-30 04:16:42.946556 | orchestrator | Friday 30 January 2026 04:16:41 +0000 (0:00:00.666) 0:08:28.896 ******** 2026-01-30 04:16:42.946566 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:42.946576 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:42.946586 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:42.946595 | orchestrator | 2026-01-30 04:16:42.946606 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:16:42.946617 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 04:16:42.946631 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-01-30 04:16:42.946642 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-30 04:16:42.946652 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-30 04:16:42.946662 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-30 04:16:42.946673 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-01-30 04:16:42.946682 | orchestrator | 
testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-30 04:16:42.946688 | orchestrator | 2026-01-30 04:16:42.946694 | orchestrator | 2026-01-30 04:16:42.946701 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:16:42.946707 | orchestrator | Friday 30 January 2026 04:16:42 +0000 (0:00:00.618) 0:08:29.515 ******** 2026-01-30 04:16:42.946713 | orchestrator | =============================================================================== 2026-01-30 04:16:42.946719 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.29s 2026-01-30 04:16:42.946748 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.47s 2026-01-30 04:16:42.946755 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 27.12s 2026-01-30 04:16:42.946761 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.52s 2026-01-30 04:16:42.946780 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.21s 2026-01-30 04:16:42.946787 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.63s 2026-01-30 04:16:42.946793 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.18s 2026-01-30 04:16:42.946799 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.89s 2026-01-30 04:16:42.946805 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.85s 2026-01-30 04:16:42.946812 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.62s 2026-01-30 04:16:42.946818 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.75s 2026-01-30 04:16:42.946824 | orchestrator | nova-cell : Get a list of existing cells 
------------------------------- 12.95s 2026-01-30 04:16:42.946830 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.90s 2026-01-30 04:16:42.946836 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.70s 2026-01-30 04:16:42.946843 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.36s 2026-01-30 04:16:42.946849 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.63s 2026-01-30 04:16:42.946855 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.33s 2026-01-30 04:16:42.946861 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.00s 2026-01-30 04:16:42.946868 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.95s 2026-01-30 04:16:42.946874 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.79s 2026-01-30 04:16:45.171964 | orchestrator | 2026-01-30 04:16:45 | INFO  | Task 42086f34-3ece-4d68-9d81-d2fe646494bf (horizon) was prepared for execution. 2026-01-30 04:16:45.172091 | orchestrator | 2026-01-30 04:16:45 | INFO  | It takes a moment until task 42086f34-3ece-4d68-9d81-d2fe646494bf (horizon) has been started and output is visible here. 
2026-01-30 04:16:51.193080 | orchestrator | 2026-01-30 04:16:51.193185 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:16:51.193208 | orchestrator | 2026-01-30 04:16:51.193225 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:16:51.193243 | orchestrator | Friday 30 January 2026 04:16:48 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-01-30 04:16:51.193259 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:51.193276 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:51.193294 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:51.193310 | orchestrator | 2026-01-30 04:16:51.193329 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:16:51.193347 | orchestrator | Friday 30 January 2026 04:16:49 +0000 (0:00:00.222) 0:00:00.409 ******** 2026-01-30 04:16:51.193363 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-30 04:16:51.193383 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-30 04:16:51.193401 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-30 04:16:51.193420 | orchestrator | 2026-01-30 04:16:51.193437 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-30 04:16:51.193456 | orchestrator | 2026-01-30 04:16:51.193474 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-30 04:16:51.193490 | orchestrator | Friday 30 January 2026 04:16:49 +0000 (0:00:00.311) 0:00:00.721 ******** 2026-01-30 04:16:51.193509 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:16:51.193555 | orchestrator | 2026-01-30 04:16:51.193574 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 
2026-01-30 04:16:51.193591 | orchestrator | Friday 30 January 2026 04:16:49 +0000 (0:00:00.379) 0:00:01.100 ******** 2026-01-30 04:16:51.193639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:16:51.193692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:16:51.193738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:16:51.193757 | orchestrator | 2026-01-30 04:16:51.193775 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-30 04:16:51.193792 | orchestrator | Friday 30 January 2026 04:16:50 +0000 (0:00:01.000) 0:00:02.101 ******** 2026-01-30 04:16:51.193809 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:51.193826 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:51.193842 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:51.193859 | orchestrator | 2026-01-30 04:16:51.193875 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-30 04:16:51.193930 | orchestrator | Friday 30 January 2026 04:16:51 +0000 (0:00:00.350) 0:00:02.451 ******** 2026-01-30 04:16:51.193958 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-30 04:16:56.239741 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-30 04:16:56.239840 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-30 04:16:56.239855 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-01-30 04:16:56.239867 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-30 04:16:56.239878 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-30 04:16:56.239981 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-30 04:16:56.239995 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-30 04:16:56.240006 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-30 04:16:56.240017 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-30 04:16:56.240028 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-30 04:16:56.240039 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-30 04:16:56.240050 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-30 04:16:56.240061 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-30 04:16:56.240071 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-30 04:16:56.240082 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-30 04:16:56.240093 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-30 04:16:56.240103 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-30 04:16:56.240114 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-30 04:16:56.240125 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-30 04:16:56.240135 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'mistral', 'enabled': False})  2026-01-30 04:16:56.240146 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-30 04:16:56.240168 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-30 04:16:56.240180 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-30 04:16:56.240192 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-30 04:16:56.240205 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-30 04:16:56.240215 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-30 04:16:56.240241 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-30 04:16:56.240253 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-30 04:16:56.240264 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-30 04:16:56.240278 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-30 04:16:56.240291 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-30 
04:16:56.240303 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-30 04:16:56.240316 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-30 04:16:56.240328 | orchestrator | 2026-01-30 04:16:56.240348 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-30 04:16:56.240360 | orchestrator | Friday 30 January 2026 04:16:51 +0000 (0:00:00.649) 0:00:03.100 ******** 2026-01-30 04:16:56.240370 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:56.240382 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:56.240393 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:56.240403 | orchestrator | 2026-01-30 04:16:56.240414 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-30 04:16:56.240425 | orchestrator | Friday 30 January 2026 04:16:52 +0000 (0:00:00.267) 0:00:03.367 ******** 2026-01-30 04:16:56.240436 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.240447 | orchestrator | 2026-01-30 04:16:56.240475 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-30 04:16:56.240486 | orchestrator | Friday 30 January 2026 04:16:52 +0000 (0:00:00.208) 0:00:03.576 ******** 2026-01-30 04:16:56.240497 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.240508 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:56.240518 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:56.240529 | orchestrator | 2026-01-30 04:16:56.240540 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-30 04:16:56.240551 | orchestrator | Friday 30 January 2026 04:16:52 +0000 (0:00:00.248) 0:00:03.825 
******** 2026-01-30 04:16:56.240562 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:56.240572 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:56.240583 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:56.240594 | orchestrator | 2026-01-30 04:16:56.240604 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-30 04:16:56.240615 | orchestrator | Friday 30 January 2026 04:16:52 +0000 (0:00:00.276) 0:00:04.101 ******** 2026-01-30 04:16:56.240626 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.240636 | orchestrator | 2026-01-30 04:16:56.240647 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-30 04:16:56.240659 | orchestrator | Friday 30 January 2026 04:16:52 +0000 (0:00:00.099) 0:00:04.200 ******** 2026-01-30 04:16:56.240669 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.240680 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:56.240691 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:56.240702 | orchestrator | 2026-01-30 04:16:56.240713 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-30 04:16:56.240724 | orchestrator | Friday 30 January 2026 04:16:53 +0000 (0:00:00.271) 0:00:04.471 ******** 2026-01-30 04:16:56.240734 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:56.240745 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:56.240756 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:56.240766 | orchestrator | 2026-01-30 04:16:56.240777 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-30 04:16:56.240788 | orchestrator | Friday 30 January 2026 04:16:53 +0000 (0:00:00.385) 0:00:04.857 ******** 2026-01-30 04:16:56.240799 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.240809 | orchestrator | 2026-01-30 04:16:56.240820 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-01-30 04:16:56.240831 | orchestrator | Friday 30 January 2026 04:16:53 +0000 (0:00:00.119) 0:00:04.977 ******** 2026-01-30 04:16:56.240841 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.240852 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:56.240863 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:56.240873 | orchestrator | 2026-01-30 04:16:56.240885 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-30 04:16:56.240917 | orchestrator | Friday 30 January 2026 04:16:53 +0000 (0:00:00.245) 0:00:05.223 ******** 2026-01-30 04:16:56.240928 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:56.240939 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:56.240950 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:56.240961 | orchestrator | 2026-01-30 04:16:56.240971 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-30 04:16:56.240989 | orchestrator | Friday 30 January 2026 04:16:54 +0000 (0:00:00.279) 0:00:05.502 ******** 2026-01-30 04:16:56.241000 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.241011 | orchestrator | 2026-01-30 04:16:56.241021 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-30 04:16:56.241032 | orchestrator | Friday 30 January 2026 04:16:54 +0000 (0:00:00.124) 0:00:05.627 ******** 2026-01-30 04:16:56.241043 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.241054 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:56.241064 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:56.241075 | orchestrator | 2026-01-30 04:16:56.241091 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-30 04:16:56.241102 | orchestrator | Friday 30 January 2026 04:16:54 +0000 (0:00:00.429) 
0:00:06.057 ******** 2026-01-30 04:16:56.241113 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:56.241123 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:56.241134 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:56.241145 | orchestrator | 2026-01-30 04:16:56.241156 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-30 04:16:56.241167 | orchestrator | Friday 30 January 2026 04:16:54 +0000 (0:00:00.288) 0:00:06.345 ******** 2026-01-30 04:16:56.241177 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.241188 | orchestrator | 2026-01-30 04:16:56.241199 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-30 04:16:56.241209 | orchestrator | Friday 30 January 2026 04:16:55 +0000 (0:00:00.115) 0:00:06.461 ******** 2026-01-30 04:16:56.241220 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.241231 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:16:56.241242 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:16:56.241252 | orchestrator | 2026-01-30 04:16:56.241263 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-30 04:16:56.241274 | orchestrator | Friday 30 January 2026 04:16:55 +0000 (0:00:00.274) 0:00:06.736 ******** 2026-01-30 04:16:56.241285 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:16:56.241295 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:16:56.241306 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:16:56.241317 | orchestrator | 2026-01-30 04:16:56.241327 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-30 04:16:56.241338 | orchestrator | Friday 30 January 2026 04:16:55 +0000 (0:00:00.292) 0:00:07.029 ******** 2026-01-30 04:16:56.241349 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:16:56.241359 | orchestrator | 2026-01-30 04:16:56.241370 | orchestrator | 
TASK [horizon : Update custom policy file name] ********************************
2026-01-30 04:16:56.241381 | orchestrator | Friday 30 January 2026 04:16:55 +0000 (0:00:00.278) 0:00:07.308 ********
2026-01-30 04:16:56.241391 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:16:56.241402 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:16:56.241413 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:16:56.241423 | orchestrator |
2026-01-30 04:16:56.241434 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-30 04:16:56.241453 | orchestrator | Friday 30 January 2026 04:16:56 +0000 (0:00:00.284) 0:00:07.592 ********
2026-01-30 04:17:09.260621 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:17:09.260732 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:17:09.260746 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:17:09.260774 | orchestrator |
2026-01-30 04:17:09.260786 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-30 04:17:09.260798 | orchestrator | Friday 30 January 2026 04:16:56 +0000 (0:00:00.309) 0:00:07.901 ********
2026-01-30 04:17:09.260808 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.260820 | orchestrator |
2026-01-30 04:17:09.260831 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-30 04:17:09.260841 | orchestrator | Friday 30 January 2026 04:16:56 +0000 (0:00:00.127) 0:00:08.029 ********
2026-01-30 04:17:09.260878 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.260890 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:17:09.260964 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:17:09.260977 | orchestrator |
2026-01-30 04:17:09.260988 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-30 04:17:09.260998 | orchestrator | Friday 30 January 2026 04:16:56 +0000 (0:00:00.297) 0:00:08.327 ********
2026-01-30 04:17:09.261009 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:17:09.261019 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:17:09.261030 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:17:09.261037 | orchestrator |
2026-01-30 04:17:09.261044 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-30 04:17:09.261050 | orchestrator | Friday 30 January 2026 04:16:57 +0000 (0:00:00.471) 0:00:08.798 ********
2026-01-30 04:17:09.261057 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261063 | orchestrator |
2026-01-30 04:17:09.261069 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-30 04:17:09.261075 | orchestrator | Friday 30 January 2026 04:16:57 +0000 (0:00:00.131) 0:00:08.930 ********
2026-01-30 04:17:09.261082 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261088 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:17:09.261094 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:17:09.261101 | orchestrator |
2026-01-30 04:17:09.261111 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-30 04:17:09.261122 | orchestrator | Friday 30 January 2026 04:16:57 +0000 (0:00:00.272) 0:00:09.203 ********
2026-01-30 04:17:09.261133 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:17:09.261144 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:17:09.261155 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:17:09.261166 | orchestrator |
2026-01-30 04:17:09.261177 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-30 04:17:09.261187 | orchestrator | Friday 30 January 2026 04:16:58 +0000 (0:00:00.323) 0:00:09.526 ********
2026-01-30 04:17:09.261197 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261208 | orchestrator |
2026-01-30 04:17:09.261215 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-30 04:17:09.261223 | orchestrator | Friday 30 January 2026 04:16:58 +0000 (0:00:00.136) 0:00:09.663 ********
2026-01-30 04:17:09.261230 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261237 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:17:09.261245 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:17:09.261251 | orchestrator |
2026-01-30 04:17:09.261259 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-30 04:17:09.261266 | orchestrator | Friday 30 January 2026 04:16:58 +0000 (0:00:00.446) 0:00:10.109 ********
2026-01-30 04:17:09.261273 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:17:09.261280 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:17:09.261287 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:17:09.261294 | orchestrator |
2026-01-30 04:17:09.261301 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-30 04:17:09.261309 | orchestrator | Friday 30 January 2026 04:16:59 +0000 (0:00:00.303) 0:00:10.412 ********
2026-01-30 04:17:09.261332 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261344 | orchestrator |
2026-01-30 04:17:09.261354 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-30 04:17:09.261366 | orchestrator | Friday 30 January 2026 04:16:59 +0000 (0:00:00.124) 0:00:10.537 ********
2026-01-30 04:17:09.261376 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261387 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:17:09.261398 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:17:09.261409 | orchestrator |
2026-01-30 04:17:09.261417 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-30 04:17:09.261423 | orchestrator | Friday 30 January 2026
04:16:59 +0000 (0:00:00.267) 0:00:10.804 ********
2026-01-30 04:17:09.261430 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:17:09.261444 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:17:09.261450 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:17:09.261456 | orchestrator |
2026-01-30 04:17:09.261463 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-30 04:17:09.261469 | orchestrator | Friday 30 January 2026 04:17:01 +0000 (0:00:01.624) 0:00:12.429 ********
2026-01-30 04:17:09.261476 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-30 04:17:09.261483 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-30 04:17:09.261489 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-30 04:17:09.261495 | orchestrator |
2026-01-30 04:17:09.261501 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-30 04:17:09.261507 | orchestrator | Friday 30 January 2026 04:17:02 +0000 (0:00:01.793) 0:00:14.222 ********
2026-01-30 04:17:09.261514 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-30 04:17:09.261522 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-30 04:17:09.261528 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-30 04:17:09.261534 | orchestrator |
2026-01-30 04:17:09.261540 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-30 04:17:09.261562 | orchestrator | Friday 30 January 2026 04:17:04 +0000 (0:00:01.815) 0:00:16.038 ********
2026-01-30 04:17:09.261569 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-30 04:17:09.261575 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-30 04:17:09.261581 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-30 04:17:09.261587 | orchestrator |
2026-01-30 04:17:09.261594 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-30 04:17:09.261600 | orchestrator | Friday 30 January 2026 04:17:06 +0000 (0:00:01.510) 0:00:17.549 ********
2026-01-30 04:17:09.261606 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261613 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:17:09.261619 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:17:09.261625 | orchestrator |
2026-01-30 04:17:09.261631 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-30 04:17:09.261638 | orchestrator | Friday 30 January 2026 04:17:06 +0000 (0:00:00.287) 0:00:17.836 ********
2026-01-30 04:17:09.261644 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:09.261650 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:17:09.261657 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:17:09.261663 | orchestrator |
2026-01-30 04:17:09.261669 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-30 04:17:09.261676 | orchestrator | Friday 30 January 2026 04:17:06 +0000 (0:00:00.471) 0:00:18.307 ********
2026-01-30 04:17:09.261682 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:17:09.261688 | orchestrator |
2026-01-30 04:17:09.261695 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-30 04:17:09.261701 | orchestrator |
Friday 30 January 2026 04:17:07 +0000 (0:00:00.581) 0:00:18.889 ******** 2026-01-30 04:17:09.261717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:17:09.261740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:17:10.072090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:17:10.072200 | orchestrator | 2026-01-30 04:17:10.072214 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-30 04:17:10.072225 | orchestrator | Friday 30 January 2026 04:17:09 +0000 (0:00:01.718) 0:00:20.607 ******** 2026-01-30 04:17:10.072253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 04:17:10.072271 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:17:10.072287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 04:17:10.072297 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:17:10.072315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 04:17:12.262300 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:17:12.262439 | orchestrator | 2026-01-30 04:17:12.262478 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-01-30 04:17:12.262496 | orchestrator | Friday 30 January 2026 04:17:10 +0000 (0:00:00.815) 0:00:21.423 ******** 2026-01-30 04:17:12.262526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 04:17:12.262545 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:17:12.262581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 04:17:12.262620 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:17:12.262635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 04:17:12.262650 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:17:12.262663 | orchestrator | 2026-01-30 04:17:12.262721 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-30 04:17:12.262736 | orchestrator | Friday 30 January 2026 04:17:10 +0000 (0:00:00.848) 0:00:22.271 ******** 2026-01-30 04:17:12.262785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:17:59.884276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:17:59.884459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 04:17:59.884480 | orchestrator | 
2026-01-30 04:17:59.884494 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-30 04:17:59.884507 | orchestrator | Friday 30 January 2026 04:17:12 +0000 (0:00:01.347) 0:00:23.618 ********
2026-01-30 04:17:59.884518 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:17:59.884530 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:17:59.884541 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:17:59.884552 | orchestrator |
2026-01-30 04:17:59.884563 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-30 04:17:59.884575 | orchestrator | Friday 30 January 2026 04:17:12 +0000 (0:00:00.483) 0:00:24.102 ********
2026-01-30 04:17:59.884586 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:17:59.884597 | orchestrator |
2026-01-30 04:17:59.884608 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-30 04:17:59.884619 | orchestrator | Friday 30 January 2026 04:17:13 +0000 (0:00:00.515) 0:00:24.618 ********
2026-01-30 04:17:59.884630 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:17:59.884641 | orchestrator |
2026-01-30 04:17:59.884653 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-30 04:17:59.884663 | orchestrator | Friday 30 January 2026 04:17:15 +0000 (0:00:02.400) 0:00:27.018 ********
2026-01-30 04:17:59.884674 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:17:59.884686 | orchestrator |
2026-01-30 04:17:59.884697 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-30 04:17:59.884718 | orchestrator | Friday 30 January 2026 04:17:17 +0000 (0:00:02.266) 0:00:29.284 ********
2026-01-30 04:17:59.884729 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:17:59.884740 | orchestrator |
2026-01-30 04:17:59.884751 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-30 04:17:59.884762 | orchestrator | Friday 30 January 2026 04:17:35 +0000 (0:00:17.534) 0:00:46.819 ********
2026-01-30 04:17:59.884773 | orchestrator |
2026-01-30 04:17:59.884784 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-30 04:17:59.884795 | orchestrator | Friday 30 January 2026 04:17:35 +0000 (0:00:00.213) 0:00:47.033 ********
2026-01-30 04:17:59.884808 | orchestrator |
2026-01-30 04:17:59.884821 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-30 04:17:59.884833 | orchestrator | Friday 30 January 2026 04:17:35 +0000 (0:00:00.066) 0:00:47.099 ********
2026-01-30 04:17:59.884845 | orchestrator |
2026-01-30 04:17:59.884858 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-30 04:17:59.884870 | orchestrator | Friday 30 January 2026 04:17:35 +0000 (0:00:00.069) 0:00:47.168 ********
2026-01-30 04:17:59.884882 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:17:59.884894 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:17:59.884907 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:17:59.884968 | orchestrator |
2026-01-30 04:17:59.884991 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:17:59.885014 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-30 04:17:59.885035 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-30 04:17:59.885054 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-30 04:17:59.885069 | orchestrator |
2026-01-30 04:17:59.885082 | orchestrator |
2026-01-30 04:17:59.885094 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:17:59.885107 | orchestrator | Friday 30 January 2026 04:17:59 +0000 (0:00:24.052) 0:01:11.221 ********
2026-01-30 04:17:59.885120 | orchestrator | ===============================================================================
2026-01-30 04:17:59.885132 | orchestrator | horizon : Restart horizon container ------------------------------------ 24.05s
2026-01-30 04:17:59.885144 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.53s
2026-01-30 04:17:59.885157 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.40s
2026-01-30 04:17:59.885176 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.27s
2026-01-30 04:17:59.885188 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.82s
2026-01-30 04:17:59.885199 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.79s
2026-01-30 04:17:59.885209 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.72s
2026-01-30 04:17:59.885220 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.62s
2026-01-30 04:17:59.885231 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s
2026-01-30 04:17:59.885242 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.35s
2026-01-30 04:17:59.885253 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.00s
2026-01-30 04:17:59.885264 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s
2026-01-30 04:17:59.885275 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.82s
2026-01-30 04:17:59.885295 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s
2026-01-30 04:18:00.209269 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2026-01-30 04:18:00.209356 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2026-01-30 04:18:00.209363 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.48s
2026-01-30 04:18:00.209367 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s
2026-01-30 04:18:00.209372 | orchestrator | horizon : Copying over custom themes ------------------------------------ 0.47s
2026-01-30 04:18:00.209376 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.45s
2026-01-30 04:18:02.417155 | orchestrator | 2026-01-30 04:18:02 | INFO  | Task f56be14f-278e-4d85-945f-fe16ec71cdd1 (skyline) was prepared for execution.
2026-01-30 04:18:02.417266 | orchestrator | 2026-01-30 04:18:02 | INFO  | It takes a moment until task f56be14f-278e-4d85-945f-fe16ec71cdd1 (skyline) has been started and output is visible here.
2026-01-30 04:18:34.022254 | orchestrator |
2026-01-30 04:18:34.022385 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:18:34.022406 | orchestrator |
2026-01-30 04:18:34.022421 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:18:34.022435 | orchestrator | Friday 30 January 2026 04:18:06 +0000 (0:00:00.246) 0:00:00.246 ********
2026-01-30 04:18:34.022448 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:18:34.022462 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:18:34.022475 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:18:34.022488 | orchestrator |
2026-01-30 04:18:34.022501 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:18:34.022510 | orchestrator | Friday 30 January 2026 04:18:06 +0000 (0:00:00.302) 0:00:00.549 ********
2026-01-30 04:18:34.022519 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-01-30 04:18:34.022528 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-01-30 04:18:34.022536 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-01-30 04:18:34.022544 | orchestrator |
2026-01-30 04:18:34.022552 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-01-30 04:18:34.022566 | orchestrator |
2026-01-30 04:18:34.022579 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-01-30 04:18:34.022595 | orchestrator | Friday 30 January 2026 04:18:07 +0000 (0:00:00.406) 0:00:00.955 ********
2026-01-30 04:18:34.022609 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:18:34.022622 | orchestrator |
2026-01-30 04:18:34.022635 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-01-30 04:18:34.022648 | orchestrator | Friday 30 January 2026 04:18:07 +0000 (0:00:00.519) 0:00:01.475 ********
2026-01-30 04:18:34.022661 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-01-30 04:18:34.022674 | orchestrator |
2026-01-30 04:18:34.022688 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-01-30 04:18:34.022701 | orchestrator | Friday 30 January 2026 04:18:11 +0000 (0:00:03.424) 0:00:04.899 ********
2026-01-30 04:18:34.022715 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-01-30 04:18:34.022730 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-01-30 04:18:34.022744 | orchestrator |
2026-01-30 04:18:34.022757 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-01-30 04:18:34.022769 | orchestrator | Friday 30 January 2026 04:18:17 +0000 (0:00:06.622) 0:00:11.522 ********
2026-01-30 04:18:34.022778 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-30 04:18:34.022787 | orchestrator |
2026-01-30 04:18:34.022796 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-01-30 04:18:34.022804 | orchestrator | Friday 30 January 2026 04:18:20 +0000 (0:00:03.307) 0:00:14.830 ********
2026-01-30 04:18:34.022840 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-30 04:18:34.022850 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-01-30 04:18:34.022858 | orchestrator |
2026-01-30 04:18:34.022866 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-01-30 04:18:34.022874 | orchestrator | Friday 30 January 2026 04:18:25 +0000 (0:00:04.253) 0:00:19.083 ********
2026-01-30 04:18:34.022883 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-30 04:18:34.022891 | orchestrator | 2026-01-30 04:18:34.022912 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-01-30 04:18:34.022920 | orchestrator | Friday 30 January 2026 04:18:28 +0000 (0:00:03.409) 0:00:22.493 ******** 2026-01-30 04:18:34.022928 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-01-30 04:18:34.022959 | orchestrator | 2026-01-30 04:18:34.022973 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-01-30 04:18:34.022982 | orchestrator | Friday 30 January 2026 04:18:32 +0000 (0:00:04.105) 0:00:26.598 ******** 2026-01-30 04:18:34.022993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:34.023026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:34.023036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:34.023056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:34.023081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:34.023105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:37.782544 | orchestrator | 2026-01-30 04:18:37.782643 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-01-30 04:18:37.782660 | orchestrator | Friday 30 January 2026 04:18:34 +0000 (0:00:01.284) 0:00:27.883 ******** 2026-01-30 04:18:37.782672 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:18:37.782683 | orchestrator | 2026-01-30 04:18:37.782693 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-01-30 04:18:37.782704 | orchestrator | Friday 30 January 2026 04:18:34 +0000 (0:00:00.667) 0:00:28.551 ******** 2026-01-30 04:18:37.782716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:37.782789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:37.782805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:37.782866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:37.782888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:37.782918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:37.782985 | orchestrator | 2026-01-30 04:18:37.783004 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-01-30 04:18:37.783018 | orchestrator | Friday 30 January 2026 04:18:37 +0000 (0:00:02.510) 0:00:31.061 ******** 2026-01-30 04:18:37.783036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-30 04:18:37.783047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-30 04:18:37.783058 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:18:37.783084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970214 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:18:38.970260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970287 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:18:38.970299 | orchestrator | 2026-01-30 04:18:38.970312 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-01-30 04:18:38.970325 | orchestrator | Friday 30 January 2026 04:18:37 +0000 (0:00:00.592) 0:00:31.653 ******** 2026-01-30 04:18:38.970336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970399 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:18:38.970417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970440 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:18:38.970451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-30 04:18:38.970478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-30 04:18:47.076911 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:18:47.077080 | orchestrator | 2026-01-30 04:18:47.077090 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-01-30 04:18:47.077097 | orchestrator | Friday 30 January 2026 04:18:38 +0000 (0:00:01.184) 0:00:32.838 ******** 2026-01-30 04:18:47.077116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:47.077123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:47.077129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:47.077151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:47.077174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:47.077180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:47.077185 | orchestrator | 2026-01-30 04:18:47.077191 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-01-30 04:18:47.077196 | orchestrator | Friday 30 January 2026 04:18:41 +0000 (0:00:02.344) 0:00:35.183 ******** 2026-01-30 04:18:47.077201 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-01-30 04:18:47.077207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-01-30 04:18:47.077212 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-01-30 04:18:47.077217 | orchestrator | 2026-01-30 04:18:47.077222 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-01-30 04:18:47.077227 | orchestrator | Friday 30 January 2026 04:18:42 +0000 (0:00:01.523) 0:00:36.706 ******** 2026-01-30 04:18:47.077237 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-01-30 04:18:47.077243 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-01-30 04:18:47.077248 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-01-30 04:18:47.077253 | orchestrator | 2026-01-30 04:18:47.077258 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-01-30 04:18:47.077263 | orchestrator | Friday 30 January 2026 04:18:44 +0000 (0:00:01.947) 0:00:38.654 ******** 2026-01-30 04:18:47.077268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:47.077279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112827 | orchestrator | 2026-01-30 04:18:49.112836 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-01-30 04:18:49.112844 | orchestrator | Friday 30 January 2026 04:18:47 +0000 (0:00:02.295) 0:00:40.949 ******** 2026-01-30 04:18:49.112852 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:18:49.112860 | orchestrator | skipping: 
[testbed-node-1] 2026-01-30 04:18:49.112866 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:18:49.112874 | orchestrator | 2026-01-30 04:18:49.112892 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-01-30 04:18:49.112899 | orchestrator | Friday 30 January 2026 04:18:47 +0000 (0:00:00.277) 0:00:41.227 ******** 2026-01-30 04:18:49.112911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.112981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:18:49.113001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:19:22.760165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-30 04:19:22.760288 | orchestrator | 2026-01-30 04:19:22.760302 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-01-30 04:19:22.760314 | orchestrator | Friday 30 January 2026 04:18:49 +0000 (0:00:01.755) 0:00:42.983 ******** 2026-01-30 04:19:22.760323 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:19:22.760334 | orchestrator | 2026-01-30 04:19:22.760343 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-01-30 04:19:22.760352 | orchestrator | Friday 30 January 2026 04:18:51 +0000 (0:00:02.171) 0:00:45.155 ******** 2026-01-30 04:19:22.760361 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:19:22.760370 | orchestrator | 2026-01-30 04:19:22.760379 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-01-30 04:19:22.760388 | orchestrator | Friday 30 January 2026 04:18:53 +0000 (0:00:02.351) 0:00:47.506 ******** 2026-01-30 04:19:22.760397 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:19:22.760407 | orchestrator | 2026-01-30 04:19:22.760416 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-01-30 04:19:22.760425 | orchestrator | Friday 30 January 2026 04:19:01 +0000 (0:00:08.254) 0:00:55.761 ******** 2026-01-30 04:19:22.760434 | orchestrator | 2026-01-30 04:19:22.760443 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-01-30 04:19:22.760451 | orchestrator | Friday 30 January 2026 04:19:01 +0000 (0:00:00.065) 0:00:55.826 ******** 2026-01-30 04:19:22.760460 | orchestrator | 2026-01-30 04:19:22.760469 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-01-30 04:19:22.760478 | orchestrator | Friday 30 January 2026 04:19:02 +0000 (0:00:00.065) 0:00:55.892 ******** 2026-01-30 04:19:22.760487 | orchestrator | 2026-01-30 04:19:22.760496 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-01-30 04:19:22.760505 | orchestrator | Friday 30 January 2026 04:19:02 +0000 (0:00:00.070) 0:00:55.963 ******** 2026-01-30 04:19:22.760513 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:19:22.760522 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:19:22.760531 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:19:22.760540 | orchestrator | 2026-01-30 04:19:22.760549 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-01-30 04:19:22.760558 | orchestrator | Friday 30 January 2026 04:19:12 +0000 (0:00:10.872) 0:01:06.835 ******** 2026-01-30 04:19:22.760567 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:19:22.760576 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:19:22.760585 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:19:22.760594 | orchestrator | 2026-01-30 04:19:22.760603 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:19:22.760613 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 04:19:22.760624 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 04:19:22.760633 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 04:19:22.760642 | orchestrator | 2026-01-30 04:19:22.760651 | orchestrator | 2026-01-30 04:19:22.760666 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:19:22.760675 | orchestrator | Friday 30 
January 2026 04:19:22 +0000 (0:00:09.427) 0:01:16.262 ******** 2026-01-30 04:19:22.760684 | orchestrator | =============================================================================== 2026-01-30 04:19:22.760693 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 10.87s 2026-01-30 04:19:22.760716 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.43s 2026-01-30 04:19:22.760727 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 8.25s 2026-01-30 04:19:22.760737 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.62s 2026-01-30 04:19:22.760748 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.25s 2026-01-30 04:19:22.760758 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 4.11s 2026-01-30 04:19:22.760769 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.42s 2026-01-30 04:19:22.760779 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.41s 2026-01-30 04:19:22.760806 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.31s 2026-01-30 04:19:22.760816 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.51s 2026-01-30 04:19:22.760825 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.35s 2026-01-30 04:19:22.760834 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.34s 2026-01-30 04:19:22.760843 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.30s 2026-01-30 04:19:22.760852 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.17s 2026-01-30 04:19:22.760861 | orchestrator | skyline : Copying over 
nginx.conf files for services -------------------- 1.95s 2026-01-30 04:19:22.760870 | orchestrator | skyline : Check skyline container --------------------------------------- 1.76s 2026-01-30 04:19:22.760880 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.52s 2026-01-30 04:19:22.760889 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.28s 2026-01-30 04:19:22.760898 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.18s 2026-01-30 04:19:22.760907 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.67s 2026-01-30 04:19:24.937108 | orchestrator | 2026-01-30 04:19:24 | INFO  | Task 33d72ee0-7b0a-4db2-89c2-dbe46280945b (glance) was prepared for execution. 2026-01-30 04:19:24.937188 | orchestrator | 2026-01-30 04:19:24 | INFO  | It takes a moment until task 33d72ee0-7b0a-4db2-89c2-dbe46280945b (glance) has been started and output is visible here. 
2026-01-30 04:19:58.821524 | orchestrator | 2026-01-30 04:19:58.821657 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:19:58.821675 | orchestrator | 2026-01-30 04:19:58.821698 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:19:58.822517 | orchestrator | Friday 30 January 2026 04:19:28 +0000 (0:00:00.185) 0:00:00.185 ******** 2026-01-30 04:19:58.822564 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:19:58.822587 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:19:58.822607 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:19:58.822628 | orchestrator | 2026-01-30 04:19:58.822651 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:19:58.822673 | orchestrator | Friday 30 January 2026 04:19:28 +0000 (0:00:00.215) 0:00:00.401 ******** 2026-01-30 04:19:58.822693 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-30 04:19:58.822714 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-30 04:19:58.822736 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-30 04:19:58.822755 | orchestrator | 2026-01-30 04:19:58.822766 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-30 04:19:58.822807 | orchestrator | 2026-01-30 04:19:58.822819 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-30 04:19:58.822830 | orchestrator | Friday 30 January 2026 04:19:29 +0000 (0:00:00.321) 0:00:00.723 ******** 2026-01-30 04:19:58.822842 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:19:58.822854 | orchestrator | 2026-01-30 04:19:58.822865 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-01-30 
04:19:58.822876 | orchestrator | Friday 30 January 2026 04:19:29 +0000 (0:00:00.489) 0:00:01.212 ********
2026-01-30 04:19:58.822887 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-01-30 04:19:58.822898 | orchestrator |
2026-01-30 04:19:58.822909 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-01-30 04:19:58.822927 | orchestrator | Friday 30 January 2026 04:19:33 +0000 (0:00:03.563) 0:00:04.776 ********
2026-01-30 04:19:58.822946 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-01-30 04:19:58.822966 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-01-30 04:19:58.823009 | orchestrator |
2026-01-30 04:19:58.823022 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-01-30 04:19:58.823041 | orchestrator | Friday 30 January 2026 04:19:40 +0000 (0:00:06.704) 0:00:11.480 ********
2026-01-30 04:19:58.823054 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-30 04:19:58.823066 | orchestrator |
2026-01-30 04:19:58.823077 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-01-30 04:19:58.823088 | orchestrator | Friday 30 January 2026 04:19:43 +0000 (0:00:03.364) 0:00:14.844 ********
2026-01-30 04:19:58.823099 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-30 04:19:58.823110 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-01-30 04:19:58.823121 | orchestrator |
2026-01-30 04:19:58.823132 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-01-30 04:19:58.823142 | orchestrator | Friday 30 January 2026 04:19:47 +0000 (0:00:03.265) 0:00:19.238 ********
2026-01-30 04:19:58.823169 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30
04:19:58.823180 | orchestrator | 2026-01-30 04:19:58.823191 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-30 04:19:58.823202 | orchestrator | Friday 30 January 2026 04:19:51 +0000 (0:00:03.265) 0:00:22.503 ******** 2026-01-30 04:19:58.823213 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-30 04:19:58.823224 | orchestrator | 2026-01-30 04:19:58.823234 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-01-30 04:19:58.823245 | orchestrator | Friday 30 January 2026 04:19:55 +0000 (0:00:03.934) 0:00:26.438 ******** 2026-01-30 04:19:58.823291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:19:58.823326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:19:58.823345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:19:58.823357 | orchestrator | 2026-01-30 04:19:58.823381 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-01-30 04:19:58.823400 | orchestrator | Friday 30 January 2026 04:19:58 +0000 (0:00:03.106) 0:00:29.545 ********
2026-01-30 04:19:58.823421 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:19:58.823435 | orchestrator |
2026-01-30 04:19:58.823454 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-01-30 04:20:12.714615 | orchestrator | Friday 30 January 2026 04:19:58 +0000 (0:00:00.689) 0:00:30.234 ********
2026-01-30 04:20:12.714658 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:20:12.714664 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:20:12.714668 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:20:12.714672 | orchestrator |
2026-01-30 04:20:12.714676 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-01-30 04:20:12.714680 | orchestrator | Friday 30 January 2026 04:20:02 +0000 (0:00:03.231) 0:00:33.466 ********
2026-01-30 04:20:12.714685 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-30 04:20:12.714689 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-30 04:20:12.714693 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-30 04:20:12.714697 | orchestrator |
2026-01-30 04:20:12.714701 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-01-30 04:20:12.714705 | orchestrator | Friday 30 January 2026 04:20:03 +0000 (0:00:01.447) 0:00:34.913 ********
2026-01-30 04:20:12.714708 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-30
04:20:12.714712 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-30 04:20:12.714716 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-01-30 04:20:12.714720 | orchestrator |
2026-01-30 04:20:12.714724 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-01-30 04:20:12.714728 | orchestrator | Friday 30 January 2026 04:20:04 +0000 (0:00:01.276) 0:00:36.190 ********
2026-01-30 04:20:12.714732 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:20:12.714736 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:20:12.714740 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:20:12.714744 | orchestrator |
2026-01-30 04:20:12.714748 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-01-30 04:20:12.714751 | orchestrator | Friday 30 January 2026 04:20:05 +0000 (0:00:00.102) 0:00:36.837 ********
2026-01-30 04:20:12.714755 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:20:12.714759 | orchestrator |
2026-01-30 04:20:12.714763 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-01-30 04:20:12.714767 | orchestrator | Friday 30 January 2026 04:20:05 +0000 (0:00:00.261) 0:00:36.940 ********
2026-01-30 04:20:12.714771 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:20:12.714775 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:20:12.714779 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:20:12.714782 | orchestrator |
2026-01-30 04:20:12.714786 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-01-30 04:20:12.714790 | orchestrator | Friday 30 January 2026 04:20:05 +0000 (0:00:00.648) 0:00:37.202 ********
2026-01-30 04:20:12.714794 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:20:12.714798 | orchestrator | 2026-01-30 04:20:12.714806 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-30 04:20:12.714809 | orchestrator | Friday 30 January 2026 04:20:06 +0000 (0:00:00.648) 0:00:37.850 ******** 2026-01-30 04:20:12.714816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:20:12.714837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:20:12.714845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:20:12.714852 | orchestrator | 2026-01-30 04:20:12.714856 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-30 04:20:12.714860 | orchestrator | Friday 30 January 2026 04:20:09 +0000 (0:00:03.522) 0:00:41.373 ******** 2026-01-30 04:20:12.714868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 04:20:16.055501 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:16.055587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 04:20:16.055611 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:16.055617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 04:20:16.055621 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:16.055625 | orchestrator | 2026-01-30 04:20:16.055630 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-30 04:20:16.055635 | orchestrator | Friday 30 January 2026 04:20:12 +0000 (0:00:02.758) 0:00:44.132 ******** 2026-01-30 04:20:16.055652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 04:20:16.055661 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:16.055665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 04:20:16.055670 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:16.055678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 04:20:44.111507 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.111683 | orchestrator | 2026-01-30 04:20:44.111704 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-30 04:20:44.111718 | orchestrator | Friday 30 January 2026 04:20:16 +0000 (0:00:03.337) 0:00:47.469 ******** 2026-01-30 04:20:44.111729 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:44.111740 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:44.111750 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.111761 | orchestrator | 2026-01-30 04:20:44.111772 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-30 04:20:44.111783 | orchestrator | Friday 30 January 2026 04:20:18 +0000 (0:00:02.829) 0:00:50.298 ******** 2026-01-30 04:20:44.111814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:20:44.111831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:20:44.111881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:20:44.111895 | orchestrator | 2026-01-30 04:20:44.111906 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-30 04:20:44.111917 | orchestrator | Friday 30 January 2026 04:20:22 +0000 (0:00:03.306) 0:00:53.605 ******** 2026-01-30 04:20:44.111928 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:20:44.111939 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:20:44.111950 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:20:44.111961 | orchestrator | 2026-01-30 04:20:44.111972 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-30 04:20:44.111983 | orchestrator | Friday 30 January 2026 04:20:26 +0000 (0:00:04.517) 0:00:58.122 ******** 2026-01-30 04:20:44.112031 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:44.112044 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:44.112056 | 
orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.112068 | orchestrator | 2026-01-30 04:20:44.112080 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-30 04:20:44.112093 | orchestrator | Friday 30 January 2026 04:20:29 +0000 (0:00:02.758) 0:01:00.881 ******** 2026-01-30 04:20:44.112104 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:44.112114 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:44.112125 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.112136 | orchestrator | 2026-01-30 04:20:44.112147 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-30 04:20:44.112158 | orchestrator | Friday 30 January 2026 04:20:32 +0000 (0:00:02.593) 0:01:03.474 ******** 2026-01-30 04:20:44.112169 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:44.112188 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:44.112215 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.112234 | orchestrator | 2026-01-30 04:20:44.112251 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-30 04:20:44.112267 | orchestrator | Friday 30 January 2026 04:20:34 +0000 (0:00:02.765) 0:01:06.240 ******** 2026-01-30 04:20:44.112285 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:44.112303 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.112319 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:44.112336 | orchestrator | 2026-01-30 04:20:44.112352 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-30 04:20:44.112379 | orchestrator | Friday 30 January 2026 04:20:37 +0000 (0:00:02.708) 0:01:08.949 ******** 2026-01-30 04:20:44.112396 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:44.112412 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:44.112427 | 
orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.112444 | orchestrator | 2026-01-30 04:20:44.112461 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-30 04:20:44.112480 | orchestrator | Friday 30 January 2026 04:20:37 +0000 (0:00:00.380) 0:01:09.329 ******** 2026-01-30 04:20:44.112495 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-30 04:20:44.112513 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:20:44.112530 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-30 04:20:44.112547 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:20:44.112564 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-30 04:20:44.112581 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:20:44.112597 | orchestrator | 2026-01-30 04:20:44.112615 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-30 04:20:44.112632 | orchestrator | Friday 30 January 2026 04:20:40 +0000 (0:00:02.644) 0:01:11.974 ******** 2026-01-30 04:20:44.112649 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:20:44.112667 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:20:44.112685 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:20:44.112702 | orchestrator | 2026-01-30 04:20:44.112720 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-30 04:20:44.112752 | orchestrator | Friday 30 January 2026 04:20:44 +0000 (0:00:03.550) 0:01:15.524 ******** 2026-01-30 04:21:56.349932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:21:56.350293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:21:56.350414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 04:21:56.350444 | orchestrator | 2026-01-30 04:21:56.350464 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-30 04:21:56.350486 | orchestrator | Friday 30 January 2026 04:20:47 +0000 (0:00:03.181) 0:01:18.706 ******** 2026-01-30 04:21:56.350505 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:21:56.350526 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:21:56.350546 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:21:56.350561 | orchestrator | 2026-01-30 04:21:56.350574 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-30 04:21:56.350586 | orchestrator | Friday 30 January 2026 04:20:47 +0000 (0:00:00.357) 0:01:19.064 ******** 2026-01-30 04:21:56.350599 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:21:56.350612 | orchestrator | 2026-01-30 04:21:56.350625 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-01-30 04:21:56.350638 | orchestrator | Friday 30 January 2026 04:20:49 +0000 (0:00:02.147) 0:01:21.211 ******** 2026-01-30 04:21:56.350650 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:21:56.350674 | orchestrator | 2026-01-30 04:21:56.350688 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-30 04:21:56.350701 | orchestrator | Friday 30 January 2026 04:20:52 +0000 (0:00:02.226) 0:01:23.437 ******** 2026-01-30 04:21:56.350714 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:21:56.350727 | orchestrator | 2026-01-30 04:21:56.350740 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-30 04:21:56.350753 | orchestrator | Friday 30 January 2026 04:20:54 +0000 (0:00:02.138) 0:01:25.575 ******** 2026-01-30 04:21:56.350765 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:21:56.350777 | orchestrator | 2026-01-30 04:21:56.350788 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-01-30 04:21:56.350799 | orchestrator | Friday 30 January 2026 04:21:22 +0000 (0:00:28.206) 0:01:53.782 ******** 2026-01-30 04:21:56.350810 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:21:56.350821 | orchestrator | 2026-01-30 04:21:56.350832 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-30 04:21:56.350842 | orchestrator | Friday 30 January 2026 04:21:24 +0000 (0:00:02.229) 0:01:56.011 ******** 2026-01-30 04:21:56.350853 | orchestrator | 2026-01-30 04:21:56.350864 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-01-30 04:21:56.350875 | orchestrator | Friday 30 January 2026 04:21:24 +0000 (0:00:00.087) 0:01:56.099 ******** 2026-01-30 04:21:56.350886 | orchestrator | 2026-01-30 04:21:56.350897 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-01-30 04:21:56.350908 | orchestrator | Friday 30 January 2026 04:21:24 +0000 (0:00:00.066) 0:01:56.165 ******** 2026-01-30 04:21:56.350918 | orchestrator | 2026-01-30 04:21:56.350929 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-01-30 04:21:56.350940 | orchestrator | Friday 30 January 2026 04:21:24 +0000 (0:00:00.067) 0:01:56.232 ******** 2026-01-30 04:21:56.350951 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:21:56.350962 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:21:56.350973 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:21:56.350984 | orchestrator | 2026-01-30 04:21:56.350994 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:21:56.351007 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-30 04:21:56.351047 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-30 04:21:56.351058 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-30 04:21:56.351069 | orchestrator | 2026-01-30 04:21:56.351080 | orchestrator | 2026-01-30 04:21:56.351091 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:21:56.351103 | orchestrator | Friday 30 January 2026 04:21:56 +0000 (0:00:31.516) 0:02:27.749 ******** 2026-01-30 04:21:56.351114 | orchestrator | =============================================================================== 2026-01-30 04:21:56.351125 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.52s 2026-01-30 04:21:56.351136 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.21s 2026-01-30 04:21:56.351147 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.70s 2026-01-30 04:21:56.351168 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 4.52s 2026-01-30 04:21:56.625513 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.39s 2026-01-30 04:21:56.625653 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.93s 2026-01-30 04:21:56.625675 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.56s 2026-01-30 04:21:56.625716 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.55s 2026-01-30 04:21:56.625773 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.52s 2026-01-30 04:21:56.625783 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.36s 2026-01-30 04:21:56.625792 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.34s 2026-01-30 04:21:56.625800 | orchestrator | glance : Copying over config.json files for services -------------------- 3.31s 2026-01-30 04:21:56.625809 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.27s 2026-01-30 04:21:56.625818 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.23s 2026-01-30 04:21:56.625827 | orchestrator | glance : Check glance containers ---------------------------------------- 3.18s 2026-01-30 04:21:56.625836 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.11s 2026-01-30 04:21:56.625844 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 2.83s 2026-01-30 04:21:56.625853 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 2.77s 2026-01-30 04:21:56.625862 | orchestrator | 
glance : Copying over glance-cache.conf for glance_api ------------------ 2.76s 2026-01-30 04:21:56.625871 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 2.76s 2026-01-30 04:21:58.856867 | orchestrator | 2026-01-30 04:21:58 | INFO  | Task ef44364c-7e46-4059-9952-34a27df8c0a1 (cinder) was prepared for execution. 2026-01-30 04:21:58.856973 | orchestrator | 2026-01-30 04:21:58 | INFO  | It takes a moment until task ef44364c-7e46-4059-9952-34a27df8c0a1 (cinder) has been started and output is visible here. 2026-01-30 04:22:34.559082 | orchestrator | 2026-01-30 04:22:34.559190 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:22:34.559203 | orchestrator | 2026-01-30 04:22:34.559211 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:22:34.559219 | orchestrator | Friday 30 January 2026 04:22:02 +0000 (0:00:00.248) 0:00:00.248 ******** 2026-01-30 04:22:34.559226 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:22:34.559234 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:22:34.559242 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:22:34.559248 | orchestrator | 2026-01-30 04:22:34.559256 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:22:34.559268 | orchestrator | Friday 30 January 2026 04:22:03 +0000 (0:00:00.286) 0:00:00.535 ******** 2026-01-30 04:22:34.559279 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-01-30 04:22:34.559291 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-01-30 04:22:34.559303 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-01-30 04:22:34.559314 | orchestrator | 2026-01-30 04:22:34.559326 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-01-30 04:22:34.559338 | orchestrator | 2026-01-30 
04:22:34.559345 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-30 04:22:34.559352 | orchestrator | Friday 30 January 2026 04:22:03 +0000 (0:00:00.404) 0:00:00.940 ******** 2026-01-30 04:22:34.559359 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:22:34.559366 | orchestrator | 2026-01-30 04:22:34.559373 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-01-30 04:22:34.559380 | orchestrator | Friday 30 January 2026 04:22:04 +0000 (0:00:00.498) 0:00:01.439 ******** 2026-01-30 04:22:34.559388 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-01-30 04:22:34.559395 | orchestrator | 2026-01-30 04:22:34.559402 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-01-30 04:22:34.559409 | orchestrator | Friday 30 January 2026 04:22:07 +0000 (0:00:03.507) 0:00:04.947 ******** 2026-01-30 04:22:34.559416 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-01-30 04:22:34.559443 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-01-30 04:22:34.559450 | orchestrator | 2026-01-30 04:22:34.559457 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-01-30 04:22:34.559464 | orchestrator | Friday 30 January 2026 04:22:14 +0000 (0:00:06.522) 0:00:11.470 ******** 2026-01-30 04:22:34.559470 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-30 04:22:34.559477 | orchestrator | 2026-01-30 04:22:34.559484 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-01-30 04:22:34.559491 | orchestrator | Friday 30 January 2026 04:22:17 +0000 (0:00:03.418) 
0:00:14.888 ******** 2026-01-30 04:22:34.559497 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-30 04:22:34.559505 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-01-30 04:22:34.559512 | orchestrator | 2026-01-30 04:22:34.559518 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-01-30 04:22:34.559525 | orchestrator | Friday 30 January 2026 04:22:21 +0000 (0:00:04.205) 0:00:19.094 ******** 2026-01-30 04:22:34.559532 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30 04:22:34.559538 | orchestrator | 2026-01-30 04:22:34.559545 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-01-30 04:22:34.559552 | orchestrator | Friday 30 January 2026 04:22:25 +0000 (0:00:03.348) 0:00:22.442 ******** 2026-01-30 04:22:34.559559 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-01-30 04:22:34.559565 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-01-30 04:22:34.559573 | orchestrator | 2026-01-30 04:22:34.559581 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-01-30 04:22:34.559599 | orchestrator | Friday 30 January 2026 04:22:32 +0000 (0:00:07.519) 0:00:29.962 ******** 2026-01-30 04:22:34.559610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:22:34.559636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:22:34.559645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:22:34.559662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:34.559671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:34.559683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:34.559692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:34.559706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:40.353314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:40.353407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:40.353417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:40.353439 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:40.353447 | orchestrator | 2026-01-30 04:22:40.353455 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-30 04:22:40.353463 | orchestrator | Friday 30 January 2026 04:22:34 +0000 (0:00:02.021) 0:00:31.983 ******** 2026-01-30 04:22:40.353469 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:22:40.353477 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:22:40.353483 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:22:40.353489 | orchestrator | 2026-01-30 04:22:40.353496 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-30 04:22:40.353502 | orchestrator | Friday 30 January 2026 04:22:35 +0000 (0:00:00.522) 0:00:32.506 ******** 2026-01-30 04:22:40.353509 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:22:40.353516 | orchestrator | 2026-01-30 04:22:40.353523 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-01-30 04:22:40.353529 | orchestrator | Friday 30 January 2026 04:22:35 +0000 (0:00:00.536) 0:00:33.042 ******** 2026-01-30 04:22:40.353537 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-01-30 04:22:40.353562 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-01-30 04:22:40.353568 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-01-30 04:22:40.353575 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-01-30 04:22:40.353581 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-01-30 04:22:40.353588 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-01-30 04:22:40.353593 | orchestrator | 2026-01-30 04:22:40.353600 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-01-30 04:22:40.353607 | orchestrator | Friday 30 January 2026 04:22:37 +0000 (0:00:01.599) 0:00:34.642 ******** 2026-01-30 04:22:40.353631 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-30 04:22:40.353640 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-30 04:22:40.353651 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-30 04:22:40.353657 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-30 04:22:40.353677 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-30 04:22:50.714695 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-30 04:22:50.714813 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-30 04:22:50.714848 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-30 04:22:50.714866 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-30 04:22:50.714913 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-30 04:22:50.714959 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-30 
04:22:50.714981 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-30 04:22:50.715002 | orchestrator | 2026-01-30 04:22:50.715025 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-30 04:22:50.715078 | orchestrator | Friday 30 January 2026 04:22:40 +0000 (0:00:03.309) 0:00:37.951 ******** 2026-01-30 04:22:50.715091 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-30 04:22:50.715103 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-30 04:22:50.715115 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-30 04:22:50.715126 | orchestrator | 2026-01-30 04:22:50.715137 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-30 04:22:50.715148 | orchestrator | Friday 30 January 2026 04:22:42 +0000 (0:00:01.474) 0:00:39.426 ******** 2026-01-30 04:22:50.715160 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-30 04:22:50.715198 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-30 04:22:50.715221 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-30 04:22:50.715235 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-30 04:22:50.715247 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-30 04:22:50.715266 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-30 04:22:50.715285 | orchestrator | 2026-01-30 04:22:50.715304 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-30 04:22:50.715337 | orchestrator | Friday 30 January 2026 04:22:44 +0000 (0:00:02.577) 0:00:42.003 ******** 2026-01-30 04:22:50.715358 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-01-30 04:22:50.715382 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-30 04:22:50.715399 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-30 04:22:50.715413 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-30 04:22:50.715425 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-30 04:22:50.715436 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-30 04:22:50.715474 | orchestrator | 2026-01-30 04:22:50.715548 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-30 04:22:50.715568 | orchestrator | Friday 30 January 2026 04:22:45 +0000 (0:00:01.032) 0:00:43.035 ******** 2026-01-30 04:22:50.715585 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:22:50.715604 | orchestrator | 2026-01-30 04:22:50.715621 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-30 04:22:50.715639 | orchestrator | Friday 30 January 2026 04:22:45 +0000 (0:00:00.133) 0:00:43.169 ******** 2026-01-30 04:22:50.715657 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:22:50.715675 | orchestrator | 
skipping: [testbed-node-1] 2026-01-30 04:22:50.715694 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:22:50.715713 | orchestrator | 2026-01-30 04:22:50.715732 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-30 04:22:50.715751 | orchestrator | Friday 30 January 2026 04:22:46 +0000 (0:00:00.449) 0:00:43.619 ******** 2026-01-30 04:22:50.715771 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:22:50.715789 | orchestrator | 2026-01-30 04:22:50.715806 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-30 04:22:50.715825 | orchestrator | Friday 30 January 2026 04:22:46 +0000 (0:00:00.525) 0:00:44.144 ******** 2026-01-30 04:22:50.715863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:22:51.554805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:22:51.554916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:22:51.554953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.554965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.554976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.555006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.555018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.555095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 
04:22:51.555109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.555120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.555131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:22:51.555142 | orchestrator | 2026-01-30 04:22:51.555153 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-30 04:22:51.555165 | orchestrator | Friday 30 January 2026 04:22:50 +0000 (0:00:04.000) 0:00:48.144 ******** 2026-01-30 04:22:51.555184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-30 04:22:51.654773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:22:51.654891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 04:22:51.654908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 04:22:51.654921 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:22:51.654935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-30 04:22:51.654948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:22:51.654978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})
2026-01-30 04:22:51.655021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:22:51.655121 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:22:51.655164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:22:51.655185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:22:51.655198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:22:51.655210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:22:51.655231 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:22:51.655243 | orchestrator |
2026-01-30 04:22:51.655256 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-01-30 04:22:51.655277 | orchestrator | Friday 30 January 2026 04:22:51 +0000 (0:00:00.845) 0:00:48.989 ********
2026-01-30 04:22:52.196626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:22:52.196729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:22:52.196744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:22:52.196757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:22:52.196770 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:22:52.196784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:22:52.196846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:22:52.196866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:22:52.196878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:22:52.196890 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:22:52.196902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:22:52.196914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:22:52.196941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.517907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.518011 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:22:56.518133 | orchestrator |
2026-01-30 04:22:56.518217 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-01-30 04:22:56.518233 | orchestrator | Friday 30 January 2026 04:22:52 +0000 (0:00:00.840) 0:00:49.830 ********
2026-01-30 04:22:56.518261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:22:56.518276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:22:56.518288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:22:56.518355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.518378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.518390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.518402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.518416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.518438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:22:56.518458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838515 | orchestrator |
2026-01-30 04:23:08.838529 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-01-30 04:23:08.838542 | orchestrator | Friday 30 January 2026 04:22:56 +0000 (0:00:04.111) 0:00:53.941 ********
2026-01-30 04:23:08.838553 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-30 04:23:08.838565 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-30 04:23:08.838576 |
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-01-30 04:23:08.838587 | orchestrator |
2026-01-30 04:23:08.838598 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-01-30 04:23:08.838609 | orchestrator | Friday 30 January 2026 04:22:58 +0000 (0:00:01.807) 0:00:55.749 ********
2026-01-30 04:23:08.838622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:23:08.838659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:23:08.838698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:23:08.838711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:23:08.838792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.027780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.027873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.027913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.027924 | orchestrator |
2026-01-30 04:23:11.027933 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-01-30 04:23:11.027940 | orchestrator | Friday 30 January 2026 04:23:08 +0000 (0:00:10.515) 0:01:06.264 ********
2026-01-30 04:23:11.027946 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:23:11.027952 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:23:11.027957 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:23:11.027962 | orchestrator |
2026-01-30 04:23:11.027967 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-01-30 04:23:11.027972 | orchestrator | Friday 30 January 2026 04:23:10 +0000 (0:00:01.520) 0:01:07.785 ********
2026-01-30 04:23:11.027979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:23:11.027998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.028018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.028025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.028035 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:23:11.028115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-30 04:23:11.028122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.028128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 04:23:11.028145 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 04:23:14.731406 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:23:14.731524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-30 04:23:14.731569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:23:14.731582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 04:23:14.731595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 04:23:14.731607 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:23:14.731619 | orchestrator | 2026-01-30 
04:23:14.731631 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-30 04:23:14.731644 | orchestrator | Friday 30 January 2026 04:23:11 +0000 (0:00:00.677) 0:01:08.462 ******** 2026-01-30 04:23:14.731655 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:23:14.731666 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:23:14.731677 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:23:14.731688 | orchestrator | 2026-01-30 04:23:14.731699 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-30 04:23:14.731710 | orchestrator | Friday 30 January 2026 04:23:11 +0000 (0:00:00.511) 0:01:08.974 ******** 2026-01-30 04:23:14.731756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:23:14.731779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:23:14.731791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-30 04:23:14.731803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:23:14.731815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:23:14.731832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:23:14.731858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:24:55.590597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:24:55.590716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-30 04:24:55.590732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:24:55.590746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-30 04:24:55.590788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-01-30 04:24:55.590845 | orchestrator | 2026-01-30 04:24:55.590869 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-30 04:24:55.590891 | orchestrator | Friday 30 January 2026 04:23:14 +0000 (0:00:03.181) 0:01:12.156 ******** 2026-01-30 04:24:55.590909 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:24:55.590929 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:24:55.590948 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:24:55.590961 | orchestrator | 2026-01-30 04:24:55.590972 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-01-30 04:24:55.590983 | orchestrator | Friday 30 January 2026 04:23:15 +0000 (0:00:00.325) 0:01:12.482 ******** 2026-01-30 04:24:55.590995 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:24:55.591068 | orchestrator | 2026-01-30 04:24:55.591100 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-01-30 04:24:55.591112 | orchestrator | Friday 30 January 2026 04:23:17 +0000 (0:00:02.235) 0:01:14.718 ******** 2026-01-30 04:24:55.591124 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:24:55.591137 | orchestrator | 2026-01-30 04:24:55.591150 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-01-30 04:24:55.591162 | orchestrator | Friday 30 January 2026 04:23:19 +0000 (0:00:02.201) 0:01:16.919 ******** 2026-01-30 04:24:55.591174 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:24:55.591188 | orchestrator | 2026-01-30 04:24:55.591213 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-30 04:24:55.591242 | orchestrator | Friday 30 January 2026 04:23:38 +0000 (0:00:19.265) 0:01:36.185 ******** 2026-01-30 04:24:55.591263 | orchestrator | 2026-01-30 04:24:55.591283 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-01-30 04:24:55.591304 | orchestrator | Friday 30 January 2026 04:23:39 +0000 (0:00:00.259) 0:01:36.445 ******** 2026-01-30 04:24:55.591322 | orchestrator | 2026-01-30 04:24:55.591343 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-01-30 04:24:55.591364 | orchestrator | Friday 30 January 2026 04:23:39 +0000 (0:00:00.065) 0:01:36.510 ******** 2026-01-30 04:24:55.591385 | orchestrator | 2026-01-30 04:24:55.591406 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-01-30 04:24:55.591422 | orchestrator | Friday 30 January 2026 04:23:39 +0000 (0:00:00.066) 0:01:36.577 ******** 2026-01-30 04:24:55.591434 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:24:55.591445 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:24:55.591456 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:24:55.591467 | orchestrator | 2026-01-30 04:24:55.591477 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-01-30 04:24:55.591488 | orchestrator | Friday 30 January 2026 04:24:10 +0000 (0:00:31.481) 0:02:08.058 ******** 2026-01-30 04:24:55.591499 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:24:55.591510 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:24:55.591521 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:24:55.591531 | orchestrator | 2026-01-30 04:24:55.591543 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-01-30 04:24:55.591554 | orchestrator | Friday 30 January 2026 04:24:20 +0000 (0:00:10.054) 0:02:18.113 ******** 2026-01-30 04:24:55.591565 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:24:55.591576 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:24:55.591601 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:24:55.591612 | orchestrator | 2026-01-30 
04:24:55.591623 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-01-30 04:24:55.591647 | orchestrator | Friday 30 January 2026 04:24:49 +0000 (0:00:28.513) 0:02:46.626 ******** 2026-01-30 04:24:55.591658 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:24:55.591670 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:24:55.591681 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:24:55.591691 | orchestrator | 2026-01-30 04:24:55.591703 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-01-30 04:24:55.591715 | orchestrator | Friday 30 January 2026 04:24:55 +0000 (0:00:06.020) 0:02:52.647 ******** 2026-01-30 04:24:55.591726 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:24:55.591738 | orchestrator | 2026-01-30 04:24:55.591749 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:24:55.591761 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-01-30 04:24:55.591774 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-30 04:24:55.591784 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-30 04:24:55.591795 | orchestrator | 2026-01-30 04:24:55.591807 | orchestrator | 2026-01-30 04:24:55.591818 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:24:55.591829 | orchestrator | Friday 30 January 2026 04:24:55 +0000 (0:00:00.264) 0:02:52.911 ******** 2026-01-30 04:24:55.591840 | orchestrator | =============================================================================== 2026-01-30 04:24:55.591851 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.48s 2026-01-30 04:24:55.591870 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 28.51s 2026-01-30 04:24:55.591882 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.27s 2026-01-30 04:24:55.591893 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.52s 2026-01-30 04:24:55.591903 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.05s 2026-01-30 04:24:55.591914 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.52s 2026-01-30 04:24:55.591925 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.52s 2026-01-30 04:24:55.591936 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.02s 2026-01-30 04:24:55.591947 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.21s 2026-01-30 04:24:55.591958 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.11s 2026-01-30 04:24:55.591969 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.00s 2026-01-30 04:24:55.591980 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.51s 2026-01-30 04:24:55.591991 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.42s 2026-01-30 04:24:55.592033 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.35s 2026-01-30 04:24:55.592064 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.31s 2026-01-30 04:24:55.931890 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.18s 2026-01-30 04:24:55.931990 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.58s 2026-01-30 04:24:55.932050 | orchestrator | cinder : Creating 
Cinder database --------------------------------------- 2.24s 2026-01-30 04:24:55.932063 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.20s 2026-01-30 04:24:55.932074 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.02s 2026-01-30 04:24:58.255968 | orchestrator | 2026-01-30 04:24:58 | INFO  | Task e79e92a0-3113-493c-aa12-049eae8464b6 (barbican) was prepared for execution. 2026-01-30 04:24:58.256153 | orchestrator | 2026-01-30 04:24:58 | INFO  | It takes a moment until task e79e92a0-3113-493c-aa12-049eae8464b6 (barbican) has been started and output is visible here. 2026-01-30 04:25:42.383499 | orchestrator | 2026-01-30 04:25:42.383727 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:25:42.384421 | orchestrator | 2026-01-30 04:25:42.384458 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:25:42.384476 | orchestrator | Friday 30 January 2026 04:25:02 +0000 (0:00:00.262) 0:00:00.262 ******** 2026-01-30 04:25:42.384490 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:25:42.384506 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:25:42.384519 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:25:42.384533 | orchestrator | 2026-01-30 04:25:42.384547 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:25:42.384561 | orchestrator | Friday 30 January 2026 04:25:02 +0000 (0:00:00.317) 0:00:00.579 ******** 2026-01-30 04:25:42.384576 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-30 04:25:42.384591 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-30 04:25:42.384604 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-30 04:25:42.384617 | orchestrator | 2026-01-30 04:25:42.384631 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-01-30 04:25:42.384644 | orchestrator | 2026-01-30 04:25:42.384658 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-30 04:25:42.384672 | orchestrator | Friday 30 January 2026 04:25:03 +0000 (0:00:00.432) 0:00:01.012 ******** 2026-01-30 04:25:42.384686 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:25:42.384700 | orchestrator | 2026-01-30 04:25:42.384713 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-30 04:25:42.384727 | orchestrator | Friday 30 January 2026 04:25:03 +0000 (0:00:00.523) 0:00:01.535 ******** 2026-01-30 04:25:42.384741 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-01-30 04:25:42.384755 | orchestrator | 2026-01-30 04:25:42.384769 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-01-30 04:25:42.384782 | orchestrator | Friday 30 January 2026 04:25:07 +0000 (0:00:03.790) 0:00:05.326 ******** 2026-01-30 04:25:42.384795 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-01-30 04:25:42.384809 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-01-30 04:25:42.384823 | orchestrator | 2026-01-30 04:25:42.384836 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-01-30 04:25:42.384850 | orchestrator | Friday 30 January 2026 04:25:13 +0000 (0:00:06.113) 0:00:11.440 ******** 2026-01-30 04:25:42.384864 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-30 04:25:42.384878 | orchestrator | 2026-01-30 04:25:42.384891 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-01-30 
04:25:42.384904 | orchestrator | Friday 30 January 2026 04:25:16 +0000 (0:00:03.223) 0:00:14.663 ******** 2026-01-30 04:25:42.384918 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-30 04:25:42.384986 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-01-30 04:25:42.385001 | orchestrator | 2026-01-30 04:25:42.385016 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-01-30 04:25:42.385050 | orchestrator | Friday 30 January 2026 04:25:20 +0000 (0:00:04.060) 0:00:18.723 ******** 2026-01-30 04:25:42.385066 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30 04:25:42.385081 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-01-30 04:25:42.385110 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-01-30 04:25:42.385134 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-01-30 04:25:42.385149 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-01-30 04:25:42.385187 | orchestrator | 2026-01-30 04:25:42.385201 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-01-30 04:25:42.385215 | orchestrator | Friday 30 January 2026 04:25:36 +0000 (0:00:15.788) 0:00:34.511 ******** 2026-01-30 04:25:42.385228 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-01-30 04:25:42.385241 | orchestrator | 2026-01-30 04:25:42.385254 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-01-30 04:25:42.385268 | orchestrator | Friday 30 January 2026 04:25:40 +0000 (0:00:03.975) 0:00:38.487 ******** 2026-01-30 04:25:42.385285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:42.385326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:42.385339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:42.385359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:42.385383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:42.385396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:42.385419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:48.042254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:48.042333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:48.042340 | orchestrator | 2026-01-30 04:25:48.042347 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-30 04:25:48.042353 | orchestrator | Friday 30 January 2026 04:25:42 +0000 (0:00:01.623) 0:00:40.111 ******** 2026-01-30 04:25:48.042359 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-30 04:25:48.042364 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-30 04:25:48.042369 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-30 04:25:48.042373 | orchestrator | 2026-01-30 04:25:48.042378 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-30 04:25:48.042397 | orchestrator | Friday 30 January 2026 04:25:43 +0000 (0:00:01.033) 0:00:41.145 ******** 2026-01-30 04:25:48.042403 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:25:48.042408 | orchestrator | 2026-01-30 04:25:48.042413 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-30 04:25:48.042417 | orchestrator | Friday 30 January 2026 04:25:43 +0000 (0:00:00.291) 0:00:41.437 ******** 2026-01-30 04:25:48.042422 | orchestrator | 
skipping: [testbed-node-0] 2026-01-30 04:25:48.042427 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:25:48.042441 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:25:48.042446 | orchestrator | 2026-01-30 04:25:48.042451 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-30 04:25:48.042456 | orchestrator | Friday 30 January 2026 04:25:44 +0000 (0:00:00.331) 0:00:41.768 ******** 2026-01-30 04:25:48.042461 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:25:48.042466 | orchestrator | 2026-01-30 04:25:48.042470 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-30 04:25:48.042475 | orchestrator | Friday 30 January 2026 04:25:44 +0000 (0:00:00.493) 0:00:42.262 ******** 2026-01-30 04:25:48.042481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:48.042498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:48.042503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:48.042514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:48.042523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:48.042528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:48.042533 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:48.042543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:49.386660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:25:49.386817 | orchestrator | 2026-01-30 04:25:49.386849 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-01-30 04:25:49.386870 | orchestrator | Friday 30 January 2026 04:25:48 +0000 (0:00:03.507) 0:00:45.769 ******** 2026-01-30 04:25:49.386941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:25:49.386963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:25:49.386983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:25:49.387002 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:25:49.387024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:25:49.387069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:25:49.387104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:25:49.387117 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:25:49.387135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:25:49.387147 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:25:49.387160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:25:49.387173 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:25:49.387186 | orchestrator | 2026-01-30 04:25:49.387200 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-30 04:25:49.387213 | orchestrator | Friday 30 January 2026 04:25:48 +0000 (0:00:00.587) 0:00:46.357 ******** 2026-01-30 04:25:49.387235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:25:52.833227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:25:52.833311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 
04:25:52.833319 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:25:52.833326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:25:52.833331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:25:52.833335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:25:52.833354 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:25:52.833370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:25:52.833374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:25:52.833381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:25:52.833385 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:25:52.833389 | orchestrator | 2026-01-30 04:25:52.833394 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-01-30 04:25:52.833399 | orchestrator | Friday 30 January 2026 04:25:49 +0000 (0:00:00.765) 0:00:47.122 ******** 2026-01-30 04:25:52.833403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:52.833408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:25:52.833419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:26:01.776224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:01.776309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:01.776318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:01.776324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:01.776344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:01.776349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:01.776355 | orchestrator | 2026-01-30 04:26:01.776361 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-01-30 04:26:01.776367 | orchestrator | Friday 30 January 2026 04:25:52 +0000 (0:00:03.443) 0:00:50.566 ******** 2026-01-30 04:26:01.776372 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:26:01.776378 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:26:01.776383 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:26:01.776388 | orchestrator | 2026-01-30 04:26:01.776405 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-01-30 04:26:01.776410 | orchestrator | Friday 30 January 2026 04:25:54 +0000 (0:00:01.447) 0:00:52.013 ******** 2026-01-30 04:26:01.776415 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:26:01.776420 | orchestrator | 2026-01-30 04:26:01.776425 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-01-30 04:26:01.776430 | orchestrator | Friday 30 January 2026 04:25:55 +0000 (0:00:00.890) 0:00:52.904 ******** 2026-01-30 04:26:01.776435 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:26:01.776439 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:26:01.776444 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:26:01.776449 | orchestrator | 2026-01-30 04:26:01.776453 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-01-30 04:26:01.776458 | orchestrator | Friday 30 January 2026 04:25:55 +0000 (0:00:00.526) 0:00:53.431 ******** 2026-01-30 04:26:01.776491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:26:01.776498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:26:01.776508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:26:01.776517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:02.562291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:02.562404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:02.562420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:02.562452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:02.562462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:02.562471 | orchestrator | 2026-01-30 04:26:02.562482 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-01-30 04:26:02.562493 | orchestrator | Friday 30 January 2026 04:26:01 +0000 (0:00:06.080) 0:00:59.511 ******** 2026-01-30 04:26:02.562519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:26:02.562535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:26:02.562545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:26:02.562565 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:26:02.562577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:26:02.562586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:26:02.562596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:26:02.562605 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:26:02.562633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-30 04:26:04.932227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:26:04.932395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:26:04.932418 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:26:04.932433 | orchestrator | 2026-01-30 04:26:04.932445 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-01-30 04:26:04.932458 | orchestrator | Friday 30 January 2026 04:26:02 +0000 (0:00:00.782) 0:01:00.294 ******** 2026-01-30 04:26:04.932470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:26:04.932483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:26:04.932523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-30 04:26:04.932537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:04.932561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:04.932574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:04.932585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:04.932597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:04.932609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:26:04.932621 | orchestrator | 2026-01-30 04:26:04.932632 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-30 04:26:04.932655 | orchestrator | Friday 30 January 2026 04:26:04 +0000 (0:00:02.367) 0:01:02.661 ******** 2026-01-30 04:26:51.221337 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:26:51.221435 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
04:26:51.221444 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:26:51.221452 | orchestrator | 2026-01-30 04:26:51.221460 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-01-30 04:26:51.221468 | orchestrator | Friday 30 January 2026 04:26:05 +0000 (0:00:00.265) 0:01:02.926 ******** 2026-01-30 04:26:51.221476 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:26:51.221482 | orchestrator | 2026-01-30 04:26:51.221490 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-01-30 04:26:51.221497 | orchestrator | Friday 30 January 2026 04:26:07 +0000 (0:00:02.171) 0:01:05.098 ******** 2026-01-30 04:26:51.221503 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:26:51.221510 | orchestrator | 2026-01-30 04:26:51.221518 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-01-30 04:26:51.221524 | orchestrator | Friday 30 January 2026 04:26:09 +0000 (0:00:02.266) 0:01:07.365 ******** 2026-01-30 04:26:51.221532 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:26:51.221539 | orchestrator | 2026-01-30 04:26:51.221546 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-30 04:26:51.221553 | orchestrator | Friday 30 January 2026 04:26:21 +0000 (0:00:12.143) 0:01:19.508 ******** 2026-01-30 04:26:51.221560 | orchestrator | 2026-01-30 04:26:51.221567 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-30 04:26:51.221574 | orchestrator | Friday 30 January 2026 04:26:21 +0000 (0:00:00.215) 0:01:19.723 ******** 2026-01-30 04:26:51.221580 | orchestrator | 2026-01-30 04:26:51.221587 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-01-30 04:26:51.221594 | orchestrator | Friday 30 January 2026 04:26:22 +0000 (0:00:00.064) 0:01:19.788 ******** 2026-01-30 
04:26:51.221601 | orchestrator | 2026-01-30 04:26:51.221608 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-01-30 04:26:51.221615 | orchestrator | Friday 30 January 2026 04:26:22 +0000 (0:00:00.067) 0:01:19.856 ******** 2026-01-30 04:26:51.221621 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:26:51.221627 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:26:51.221634 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:26:51.221641 | orchestrator | 2026-01-30 04:26:51.221648 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-01-30 04:26:51.221655 | orchestrator | Friday 30 January 2026 04:26:33 +0000 (0:00:11.123) 0:01:30.980 ******** 2026-01-30 04:26:51.221662 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:26:51.221669 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:26:51.221676 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:26:51.221683 | orchestrator | 2026-01-30 04:26:51.221689 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-01-30 04:26:51.221696 | orchestrator | Friday 30 January 2026 04:26:41 +0000 (0:00:07.793) 0:01:38.773 ******** 2026-01-30 04:26:51.221704 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:26:51.221711 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:26:51.221718 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:26:51.221725 | orchestrator | 2026-01-30 04:26:51.221732 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:26:51.221740 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-30 04:26:51.221748 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 04:26:51.221755 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 04:26:51.221762 | orchestrator | 2026-01-30 04:26:51.221769 | orchestrator | 2026-01-30 04:26:51.221776 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:26:51.221848 | orchestrator | Friday 30 January 2026 04:26:50 +0000 (0:00:09.884) 0:01:48.658 ******** 2026-01-30 04:26:51.221858 | orchestrator | =============================================================================== 2026-01-30 04:26:51.221865 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.79s 2026-01-30 04:26:51.221872 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.14s 2026-01-30 04:26:51.221879 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.12s 2026-01-30 04:26:51.221886 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.88s 2026-01-30 04:26:51.221893 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.79s 2026-01-30 04:26:51.221900 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.11s 2026-01-30 04:26:51.221908 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.08s 2026-01-30 04:26:51.221917 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.06s 2026-01-30 04:26:51.221925 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.98s 2026-01-30 04:26:51.221933 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.79s 2026-01-30 04:26:51.221941 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.51s 2026-01-30 04:26:51.221948 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.44s 
2026-01-30 04:26:51.221958 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.22s 2026-01-30 04:26:51.221966 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.37s 2026-01-30 04:26:51.221974 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.27s 2026-01-30 04:26:51.222011 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.17s 2026-01-30 04:26:51.222073 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.62s 2026-01-30 04:26:51.222079 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.45s 2026-01-30 04:26:51.222085 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.03s 2026-01-30 04:26:51.222100 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.89s 2026-01-30 04:26:53.503212 | orchestrator | 2026-01-30 04:26:53 | INFO  | Task 2b2820e3-1b48-4d0b-a2c6-836ae1f91cc4 (designate) was prepared for execution. 2026-01-30 04:26:53.503340 | orchestrator | 2026-01-30 04:26:53 | INFO  | It takes a moment until task 2b2820e3-1b48-4d0b-a2c6-836ae1f91cc4 (designate) has been started and output is visible here. 
2026-01-30 04:27:25.229294 | orchestrator | 2026-01-30 04:27:25.229427 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:27:25.229449 | orchestrator | 2026-01-30 04:27:25.229466 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:27:25.229483 | orchestrator | Friday 30 January 2026 04:26:57 +0000 (0:00:00.191) 0:00:00.191 ******** 2026-01-30 04:27:25.229500 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:27:25.229518 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:27:25.229534 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:27:25.229548 | orchestrator | 2026-01-30 04:27:25.229558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:27:25.229568 | orchestrator | Friday 30 January 2026 04:26:57 +0000 (0:00:00.220) 0:00:00.412 ******** 2026-01-30 04:27:25.229578 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-30 04:27:25.229588 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-30 04:27:25.229598 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-30 04:27:25.229608 | orchestrator | 2026-01-30 04:27:25.229617 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-30 04:27:25.229627 | orchestrator | 2026-01-30 04:27:25.229637 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-30 04:27:25.229671 | orchestrator | Friday 30 January 2026 04:26:57 +0000 (0:00:00.314) 0:00:00.726 ******** 2026-01-30 04:27:25.229682 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:27:25.229693 | orchestrator | 2026-01-30 04:27:25.229703 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-01-30 04:27:25.229713 | orchestrator | Friday 30 January 2026 04:26:58 +0000 (0:00:00.419) 0:00:01.146 ********
2026-01-30 04:27:25.229722 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-01-30 04:27:25.229732 | orchestrator |
2026-01-30 04:27:25.229742 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-01-30 04:27:25.229751 | orchestrator | Friday 30 January 2026 04:27:01 +0000 (0:00:03.462) 0:00:04.609 ********
2026-01-30 04:27:25.229761 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-01-30 04:27:25.229771 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-01-30 04:27:25.229841 | orchestrator |
2026-01-30 04:27:25.229861 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-01-30 04:27:25.229878 | orchestrator | Friday 30 January 2026 04:27:08 +0000 (0:00:06.699) 0:00:11.308 ********
2026-01-30 04:27:25.229894 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-30 04:27:25.229906 | orchestrator |
2026-01-30 04:27:25.229917 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-01-30 04:27:25.229928 | orchestrator | Friday 30 January 2026 04:27:11 +0000 (0:00:03.209) 0:00:14.518 ********
2026-01-30 04:27:25.229939 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-30 04:27:25.229949 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-01-30 04:27:25.229960 | orchestrator |
2026-01-30 04:27:25.229971 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-01-30 04:27:25.229981 | orchestrator | Friday 30 January 2026 04:27:16 +0000 (0:00:04.301) 0:00:18.819 ********
2026-01-30 04:27:25.229992 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-30 04:27:25.230004 | orchestrator |
2026-01-30 04:27:25.230086 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-01-30 04:27:25.230148 | orchestrator | Friday 30 January 2026 04:27:19 +0000 (0:00:03.348) 0:00:22.168 ********
2026-01-30 04:27:25.230166 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-01-30 04:27:25.230182 | orchestrator |
2026-01-30 04:27:25.230199 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-01-30 04:27:25.230215 | orchestrator | Friday 30 January 2026 04:27:23 +0000 (0:00:03.784) 0:00:25.952 ********
2026-01-30 04:27:25.230251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-30 04:27:25.230291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:25.230313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:25.230325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:25.230336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:25.230347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:25.230371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:25.230410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 
04:27:31.622392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:31.622418 | orchestrator | 2026-01-30 04:27:31.622431 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-30 04:27:31.622445 | orchestrator | Friday 30 January 2026 04:27:26 +0000 (0:00:03.006) 0:00:28.959 ******** 2026-01-30 04:27:31.622459 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:27:31.622472 | orchestrator | 2026-01-30 04:27:31.622484 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-30 04:27:31.622496 | orchestrator | Friday 30 January 2026 04:27:26 +0000 (0:00:00.127) 0:00:29.086 ******** 2026-01-30 04:27:31.622509 | orchestrator | skipping: [testbed-node-0] 2026-01-30 
04:27:31.622522 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:27:31.622534 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:27:31.622546 | orchestrator | 2026-01-30 04:27:31.622559 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-30 04:27:31.622571 | orchestrator | Friday 30 January 2026 04:27:26 +0000 (0:00:00.434) 0:00:29.521 ******** 2026-01-30 04:27:31.622595 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:27:31.622607 | orchestrator | 2026-01-30 04:27:31.622621 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-30 04:27:31.622634 | orchestrator | Friday 30 January 2026 04:27:27 +0000 (0:00:00.502) 0:00:30.023 ******** 2026-01-30 04:27:31.622656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:31.622684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:33.522752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:33.522957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:33.522984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:33.523288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:34.335180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:34.335259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:34.335267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:34.335293 | orchestrator | 2026-01-30 04:27:34.335300 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-30 04:27:34.335308 | orchestrator | Friday 30 January 2026 04:27:33 +0000 (0:00:06.295) 0:00:36.319 ******** 2026-01-30 04:27:34.335326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:27:34.335333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:27:34.335350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:27:34.335357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:27:34.335363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:27:34.335374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-01-30 04:27:34.335379 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:27:34.335390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:27:34.335395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:27:34.335401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:27:34.335410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.065548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.065665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.065676 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:27:35.065701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:27:35.065712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:27:35.065721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.065730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.065753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 
04:27:35.065767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.065826 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:27:35.065836 | orchestrator | 2026-01-30 04:27:35.065845 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-30 04:27:35.065856 | orchestrator | Friday 30 January 2026 04:27:34 +0000 (0:00:00.920) 0:00:37.240 ******** 2026-01-30 04:27:35.065869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:27:35.065879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:27:35.065887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.065901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376316 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:27:35.376340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:27:35.376349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:27:35.376357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376414 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:27:35.376424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:27:35.376431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:27:35.376438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:27:35.376462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:27:39.614868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:27:39.615013 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:27:39.615041 | orchestrator | 2026-01-30 04:27:39.615063 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-30 
04:27:39.615085 | orchestrator | Friday 30 January 2026 04:27:35 +0000 (0:00:00.932) 0:00:38.172 ******** 2026-01-30 04:27:39.615126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:39.615149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:39.615171 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:39.615244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:39.615267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:39.615302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:39.615329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:39.615352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:39.615386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:39.615426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:39.615481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555817 | orchestrator | 2026-01-30 04:27:50.555822 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-01-30 04:27:50.555827 | orchestrator | Friday 30 January 2026 04:27:41 +0000 (0:00:06.103) 0:00:44.276 ******** 2026-01-30 04:27:50.555835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:50.555841 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:50.555849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:27:50.555854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:50.555864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:27:58.409741 | orchestrator | 2026-01-30 04:27:58.409752 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-30 04:27:58.409762 | orchestrator | Friday 30 January 2026 04:27:54 +0000 (0:00:13.469) 0:00:57.745 ******** 2026-01-30 04:27:58.409856 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-30 04:28:02.558746 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-30 04:28:02.558946 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-30 04:28:02.559733 | orchestrator | 2026-01-30 04:28:02.559760 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-30 04:28:02.559817 | orchestrator | Friday 30 January 2026 04:27:58 +0000 (0:00:03.461) 0:01:01.207 ******** 2026-01-30 04:28:02.559829 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-30 04:28:02.559840 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-30 04:28:02.559870 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-30 04:28:02.559882 | orchestrator | 2026-01-30 04:28:02.559893 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-30 04:28:02.559929 | orchestrator | Friday 30 January 2026 04:28:00 +0000 (0:00:02.358) 0:01:03.565 ******** 2026-01-30 04:28:02.559945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:28:02.559961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:28:02.559973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-01-30 04:28:02.560005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:02.560025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:02.560045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-01-30 04:28:02.560058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:02.560070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:02.560081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-30 04:28:02.560093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:02.560113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:05.291323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-01-30 04:28:05.291452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:05.291469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:05.291482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:05.291494 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:05.291506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:05.291536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:05.291558 | orchestrator | 2026-01-30 04:28:05.291571 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-01-30 04:28:05.291590 | orchestrator | Friday 30 January 2026 04:28:03 +0000 (0:00:02.849) 0:01:06.415 ******** 2026-01-30 04:28:05.291603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:28:05.291617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 
04:28:05.291628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:28:05.291640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:05.291658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:06.243298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:06.243356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:06.243376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:06.243382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:06.243393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:06.243400 | orchestrator | 2026-01-30 04:28:06.243407 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-30 04:28:06.243418 | orchestrator | Friday 30 January 2026 04:28:06 +0000 (0:00:02.621) 0:01:09.037 ******** 2026-01-30 04:28:07.143731 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:28:07.143892 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:28:07.143928 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:28:07.143940 | orchestrator | 2026-01-30 04:28:07.143953 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-01-30 04:28:07.143966 | orchestrator | Friday 30 January 2026 04:28:06 +0000 (0:00:00.298) 0:01:09.335 ******** 2026-01-30 04:28:07.143980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:28:07.143996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:28:07.144009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:07.144023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:07.144058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:07.144095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:28:07.144108 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:28:07.144132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:28:07.144145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:28:07.144157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:07.144168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:07.144187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:07.144211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:28:10.329756 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:28:10.329894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-30 04:28:10.329914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 04:28:10.329927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 04:28:10.329940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 04:28:10.329976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 04:28:10.329989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:28:10.330001 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:28:10.330012 | orchestrator | 2026-01-30 04:28:10.330161 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-01-30 04:28:10.330179 | orchestrator | Friday 30 January 2026 04:28:07 +0000 (0:00:00.713) 0:01:10.049 ******** 2026-01-30 04:28:10.330191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:28:10.330204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:28:10.330216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-30 04:28:10.330237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:10.330261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-30 04:28:12.006915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-30 04:28:12.006928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-30 04:28:12.006940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-30 04:28:12.006974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-30 04:28:12.006986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-30 04:28:12.006997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-30 04:28:12.007009 | orchestrator |
2026-01-30 04:28:12.007022 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-30 04:28:12.007038 | orchestrator | Friday 30 January 2026 04:28:11 +0000 (0:00:04.279) 0:01:14.329 ********
2026-01-30 04:28:12.007064 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:28:12.007096 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:29:36.616229 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:29:36.616332 | orchestrator |
2026-01-30 04:29:36.616344 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-01-30 04:29:36.616355 | orchestrator | Friday 30 January 2026 04:28:12 +0000 (0:00:00.476) 0:01:14.805 ********
2026-01-30 04:29:36.616363 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-01-30 04:29:36.616406 | orchestrator |
2026-01-30 04:29:36.616415 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-01-30 04:29:36.616424 | orchestrator | Friday 30 January 2026 04:28:14 +0000 (0:00:02.184) 0:01:16.990 ********
2026-01-30 04:29:36.616432 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-30 04:29:36.616441 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-01-30 04:29:36.616449 | orchestrator |
2026-01-30 04:29:36.616457 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-01-30 04:29:36.616466 | orchestrator | Friday 30 January 2026 04:28:16 +0000 (0:00:02.262) 0:01:19.252 ********
2026-01-30 04:29:36.616474 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.616482 | orchestrator |
2026-01-30 04:29:36.616490 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-30 04:29:36.616498 | orchestrator | Friday 30 January 2026 04:28:32 +0000 (0:00:15.895) 0:01:35.147 ********
2026-01-30 04:29:36.616527 | orchestrator |
2026-01-30 04:29:36.616536 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-30 04:29:36.616544 | orchestrator | Friday 30 January 2026 04:28:32 +0000 (0:00:00.065) 0:01:35.213 ********
2026-01-30 04:29:36.616552 | orchestrator |
2026-01-30 04:29:36.616560 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-01-30 04:29:36.616568 | orchestrator | Friday 30 January 2026 04:28:32 +0000 (0:00:00.062) 0:01:35.275 ********
2026-01-30 04:29:36.616576 | orchestrator |
2026-01-30 04:29:36.616585 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-01-30 04:29:36.616593 | orchestrator | Friday 30 January 2026 04:28:32 +0000 (0:00:00.067) 0:01:35.343 ********
2026-01-30 04:29:36.616601 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.616609 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:29:36.616617 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:29:36.616625 | orchestrator |
2026-01-30 04:29:36.616634 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-01-30 04:29:36.616642 | orchestrator | Friday 30 January 2026 04:28:40 +0000 (0:00:07.709) 0:01:43.052 ********
2026-01-30 04:29:36.616650 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.616658 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:29:36.616666 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:29:36.616674 | orchestrator |
2026-01-30 04:29:36.616686 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-01-30 04:29:36.616699 | orchestrator | Friday 30 January 2026 04:28:50 +0000 (0:00:10.464) 0:01:53.517 ********
2026-01-30 04:29:36.616712 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.616725 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:29:36.616794 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:29:36.616812 | orchestrator |
2026-01-30 04:29:36.616825 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-01-30 04:29:36.616838 | orchestrator | Friday 30 January 2026 04:29:00 +0000 (0:00:10.232) 0:02:03.749 ********
2026-01-30 04:29:36.616851 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:29:36.616864 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:29:36.616877 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.616891 | orchestrator |
2026-01-30 04:29:36.616904 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-01-30 04:29:36.616918 | orchestrator | Friday 30 January 2026 04:29:09 +0000 (0:00:08.666) 0:02:12.416 ********
2026-01-30 04:29:36.616932 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.616946 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:29:36.616960 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:29:36.616974 | orchestrator |
2026-01-30 04:29:36.616988 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-01-30 04:29:36.617001 | orchestrator | Friday 30 January 2026 04:29:20 +0000 (0:00:10.640) 0:02:23.056 ********
2026-01-30 04:29:36.617015 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:29:36.617029 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:29:36.617042 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.617056 | orchestrator |
2026-01-30 04:29:36.617071 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-01-30 04:29:36.617085 | orchestrator | Friday 30 January 2026 04:29:29 +0000 (0:00:08.759) 0:02:31.816 ********
2026-01-30 04:29:36.617099 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:29:36.617113 | orchestrator |
2026-01-30 04:29:36.617123 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:29:36.617134 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 04:29:36.617146 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 04:29:36.617155 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 04:29:36.617174 | orchestrator |
2026-01-30 04:29:36.617182 | orchestrator |
2026-01-30 04:29:36.617190 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:29:36.617198 | orchestrator | Friday 30 January 2026 04:29:36 +0000 (0:00:07.280) 0:02:39.097 ********
2026-01-30 04:29:36.617206 | orchestrator | ===============================================================================
2026-01-30 04:29:36.617214 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.90s
2026-01-30 04:29:36.617234 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.47s
2026-01-30 04:29:36.617259 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.64s
2026-01-30 04:29:36.617268 | orchestrator | designate : Restart designate-api container ---------------------------- 10.47s
2026-01-30 04:29:36.617276 | orchestrator | designate : Restart designate-central container ------------------------ 10.23s
2026-01-30 04:29:36.617284 | orchestrator | designate : Restart designate-worker container -------------------------- 8.76s
2026-01-30 04:29:36.617292 | orchestrator | designate : Restart designate-producer container ------------------------ 8.67s
2026-01-30 04:29:36.617300 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.71s
2026-01-30 04:29:36.617308 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.28s
2026-01-30 04:29:36.617315 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.70s
2026-01-30 04:29:36.617323 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.30s
2026-01-30 04:29:36.617331 | orchestrator | designate : Copying over config.json files for services ----------------- 6.10s
2026-01-30 04:29:36.617339 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.30s
2026-01-30 04:29:36.617347 | orchestrator | designate : Check designate containers ---------------------------------- 4.28s
2026-01-30 04:29:36.617355 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.78s
2026-01-30 04:29:36.617362 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.46s
2026-01-30 04:29:36.617370 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.46s
2026-01-30 04:29:36.617378 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.35s
2026-01-30 04:29:36.617386 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.21s
2026-01-30 04:29:36.617394 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.01s
2026-01-30 04:29:38.868253 | orchestrator | 2026-01-30 04:29:38 | INFO  | Task 4d1ea2c8-7146-4492-8730-7bf4913f5b40 (octavia) was prepared for execution.
2026-01-30 04:29:38.868354 | orchestrator | 2026-01-30 04:29:38 | INFO  | It takes a moment until task 4d1ea2c8-7146-4492-8730-7bf4913f5b40 (octavia) has been started and output is visible here.
2026-01-30 04:31:47.951376 | orchestrator |
2026-01-30 04:31:47.951493 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:31:47.951510 | orchestrator |
2026-01-30 04:31:47.951522 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:31:47.951534 | orchestrator | Friday 30 January 2026 04:29:42 +0000 (0:00:00.247) 0:00:00.247 ********
2026-01-30 04:31:47.951545 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:31:47.951557 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:31:47.951569 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:31:47.951580 | orchestrator |
2026-01-30 04:31:47.951591 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:31:47.951602 | orchestrator | Friday 30 January 2026 04:29:43 +0000 (0:00:00.331) 0:00:00.578 ********
2026-01-30 04:31:47.951613 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-30 04:31:47.951625 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-30 04:31:47.951637 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-30 04:31:47.951673 | orchestrator |
2026-01-30 04:31:47.951686 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-01-30 04:31:47.951697 | orchestrator |
2026-01-30 04:31:47.951763 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-30 04:31:47.951776 | orchestrator | Friday 30 January 2026 04:29:43 +0000 (0:00:00.397) 0:00:00.976 ********
2026-01-30 04:31:47.951788 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:31:47.951799 | orchestrator |
2026-01-30 04:31:47.951811 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-01-30 04:31:47.951822 | orchestrator | Friday 30 January 2026 04:29:44 +0000 (0:00:00.530) 0:00:01.507 ********
2026-01-30 04:31:47.951833 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-30 04:31:47.951844 | orchestrator |
2026-01-30 04:31:47.951855 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-01-30 04:31:47.951866 | orchestrator | Friday 30 January 2026 04:29:47 +0000 (0:00:03.549) 0:00:05.056 ********
2026-01-30 04:31:47.951877 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-30 04:31:47.951888 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-30 04:31:47.951900 | orchestrator |
2026-01-30 04:31:47.951914 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-30 04:31:47.951926 | orchestrator | Friday 30 January 2026 04:29:54 +0000 (0:00:06.586) 0:00:11.643 ********
2026-01-30 04:31:47.951939 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-30 04:31:47.951952 | orchestrator |
2026-01-30 04:31:47.951964 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-30 04:31:47.951977 | orchestrator | Friday 30 January 2026 04:29:57 +0000 (0:00:03.314) 0:00:14.957 ********
2026-01-30 04:31:47.951989 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-30 04:31:47.952001 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-30 04:31:47.952014 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-30 04:31:47.952026 | orchestrator |
2026-01-30 04:31:47.952039 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-30 04:31:47.952065 | orchestrator | Friday 30 January 2026 04:30:06 +0000 (0:00:08.518) 0:00:23.476 ********
2026-01-30 04:31:47.952078 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-30 04:31:47.952090 | orchestrator |
2026-01-30 04:31:47.952103 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-01-30 04:31:47.952115 | orchestrator | Friday 30 January 2026 04:30:09 +0000 (0:00:03.312) 0:00:26.789 ********
2026-01-30 04:31:47.952128 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-30 04:31:47.952140 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-30 04:31:47.952152 | orchestrator |
2026-01-30 04:31:47.952165 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-30 04:31:47.952177 | orchestrator | Friday 30 January 2026 04:30:16 +0000 (0:00:07.527) 0:00:34.316 ********
2026-01-30 04:31:47.952190 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-30 04:31:47.952202 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-30 04:31:47.952214 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-30 04:31:47.952226 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-30 04:31:47.952239 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-30 04:31:47.952252 | orchestrator |
2026-01-30 04:31:47.952264 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-30 04:31:47.952275 | orchestrator | Friday 30 January 2026 04:30:33 +0000 (0:00:16.058) 0:00:50.375 ********
2026-01-30 04:31:47.952295 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:31:47.952306 | orchestrator |
2026-01-30 04:31:47.952317 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-30 04:31:47.952329 | orchestrator | Friday 30 January 2026 04:30:33 +0000 (0:00:00.680) 0:00:51.055 ********
2026-01-30 04:31:47.952339 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.952350 | orchestrator |
2026-01-30 04:31:47.952362 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-30 04:31:47.952373 | orchestrator | Friday 30 January 2026 04:30:38 +0000 (0:00:04.806) 0:00:55.862 ********
2026-01-30 04:31:47.952384 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.952395 | orchestrator |
2026-01-30 04:31:47.952407 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-30 04:31:47.952435 | orchestrator | Friday 30 January 2026 04:30:43 +0000 (0:00:04.715) 0:01:00.577 ********
2026-01-30 04:31:47.952447 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:31:47.952458 | orchestrator |
2026-01-30 04:31:47.952483 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-30 04:31:47.952505 | orchestrator | Friday 30 January 2026 04:30:46 +0000 (0:00:03.379) 0:01:03.957 ********
2026-01-30 04:31:47.952517 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-30 04:31:47.952527 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-30 04:31:47.952539 | orchestrator |
2026-01-30 04:31:47.952550 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-30 04:31:47.952561 | orchestrator | Friday 30 January 2026 04:30:57 +0000 (0:00:11.074) 0:01:15.032 ********
2026-01-30 04:31:47.952572 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-30 04:31:47.952583 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-30 04:31:47.952595 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-30 04:31:47.952608 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-30 04:31:47.952619 | orchestrator |
2026-01-30 04:31:47.952635 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-30 04:31:47.952647 | orchestrator | Friday 30 January 2026 04:31:13 +0000 (0:00:15.894) 0:01:30.926 ********
2026-01-30 04:31:47.952657 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.952669 | orchestrator |
2026-01-30 04:31:47.952680 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-30 04:31:47.952691 | orchestrator | Friday 30 January 2026 04:31:18 +0000 (0:00:04.921) 0:01:35.848 ********
2026-01-30 04:31:47.952726 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.952740 | orchestrator |
2026-01-30 04:31:47.952751 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-30 04:31:47.952762 | orchestrator | Friday 30 January 2026 04:31:23 +0000 (0:00:05.489) 0:01:41.338 ********
2026-01-30 04:31:47.952773 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:31:47.952784 | orchestrator |
2026-01-30 04:31:47.952795 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-30 04:31:47.952806 | orchestrator | Friday 30 January 2026 04:31:24 +0000 (0:00:00.201) 0:01:41.540 ********
2026-01-30 04:31:47.952818 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:31:47.952829 | orchestrator |
2026-01-30 04:31:47.952840 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-30 04:31:47.952851 | orchestrator | Friday 30 January 2026 04:31:28 +0000 (0:00:04.480) 0:01:46.020 ********
2026-01-30 04:31:47.952862 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:31:47.952881 | orchestrator |
2026-01-30 04:31:47.952892 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-30 04:31:47.952903 | orchestrator | Friday 30 January 2026 04:31:29 +0000 (0:00:01.012) 0:01:47.033 ********
2026-01-30 04:31:47.952919 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:31:47.952931 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.952942 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:31:47.952953 | orchestrator |
2026-01-30 04:31:47.952964 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-30 04:31:47.952975 | orchestrator | Friday 30 January 2026 04:31:34 +0000 (0:00:05.231) 0:01:52.265 ********
2026-01-30 04:31:47.952986 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:31:47.952997 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:31:47.953008 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.953019 | orchestrator |
2026-01-30 04:31:47.953030 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-30 04:31:47.953041 | orchestrator | Friday 30 January 2026 04:31:40 +0000 (0:00:05.200) 0:01:57.465 ********
2026-01-30 04:31:47.953052 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.953063 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:31:47.953074 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:31:47.953085 | orchestrator |
2026-01-30 04:31:47.953096 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-30 04:31:47.953107 | orchestrator | Friday 30 January 2026 04:31:41 +0000 (0:00:00.996) 0:01:58.462 ********
2026-01-30 04:31:47.953118 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:31:47.953129 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:31:47.953139 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:31:47.953150 | orchestrator |
2026-01-30 04:31:47.953162 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-30 04:31:47.953173 | orchestrator | Friday 30 January 2026 04:31:43 +0000 (0:00:02.200) 0:02:00.662 ********
2026-01-30 04:31:47.953184 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.953194 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:31:47.953205 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:31:47.953219 | orchestrator |
2026-01-30 04:31:47.953236 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-30 04:31:47.953255 | orchestrator | Friday 30 January 2026 04:31:44 +0000 (0:00:01.232) 0:02:01.895 ********
2026-01-30 04:31:47.953283 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.953304 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:31:47.953321 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:31:47.953338 | orchestrator |
2026-01-30 04:31:47.953355 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-30 04:31:47.953370 | orchestrator | Friday 30 January 2026 04:31:45 +0000 (0:00:01.221) 0:02:03.116 ********
2026-01-30 04:31:47.953385 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:31:47.953402 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:31:47.953485 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:31:47.953505 | orchestrator |
2026-01-30 04:31:47.953538 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-30 04:32:15.512393 | orchestrator | Friday 30 January 2026 04:31:47 +0000 (0:00:02.169) 0:02:05.285 ********
2026-01-30 04:32:15.512493 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:32:15.512508 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:32:15.512518 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:32:15.512528 | orchestrator |
2026-01-30 04:32:15.512539 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-30 04:32:15.512549 | orchestrator | Friday 30 January 2026 04:31:49 +0000 (0:00:01.486) 0:02:06.772 ********
2026-01-30 04:32:15.512559 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:32:15.512569 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:32:15.512579 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:32:15.512588 | orchestrator |
2026-01-30 04:32:15.512598 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-30 04:32:15.512631 | orchestrator | Friday 30 January 2026 04:31:50 +0000 (0:00:00.638) 0:02:07.411 ********
2026-01-30 04:32:15.512642 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:32:15.512651 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:32:15.512660 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:32:15.512670 | orchestrator |
2026-01-30 04:32:15.512679 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-30 04:32:15.512689 | orchestrator | Friday 30 January 2026 04:31:53 +0000 (0:00:03.798) 0:02:11.209 ********
2026-01-30 04:32:15.512753 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:32:15.512767 | orchestrator |
2026-01-30 04:32:15.512777 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-30 04:32:15.512787 | orchestrator | Friday 30 January 2026 04:31:54 +0000 (0:00:00.489) 0:02:11.698 ********
2026-01-30 04:32:15.512796 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:32:15.512806 | orchestrator |
2026-01-30 04:32:15.512815 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-30 04:32:15.512825 | orchestrator | Friday 30 January 2026 04:31:58 +0000 (0:00:04.128) 0:02:15.827 ********
2026-01-30 04:32:15.512835 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:32:15.512844 | orchestrator |
2026-01-30 04:32:15.512854 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-30 04:32:15.512863 | orchestrator | Friday 30 January 2026 04:32:01 +0000 (0:00:03.273) 0:02:19.100 ********
2026-01-30 04:32:15.512873 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-30 04:32:15.512883 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-30 04:32:15.512893 | orchestrator |
2026-01-30 04:32:15.512903 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-30 04:32:15.512912 | orchestrator | Friday 30 January 2026 04:32:09 +0000 (0:00:07.867) 0:02:26.968 ********
2026-01-30 04:32:15.512922 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:32:15.512931 | orchestrator |
2026-01-30 04:32:15.512943 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-30 04:32:15.512953 | orchestrator | Friday 30 January 2026 04:32:13 +0000 (0:00:03.507) 0:02:30.476 ********
2026-01-30 04:32:15.512964 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:32:15.512975 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:32:15.512985 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:32:15.512996 | orchestrator |
2026-01-30 04:32:15.513007 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-30 04:32:15.513018 | orchestrator | Friday 30 January 2026 04:32:13 +0000 (0:00:00.429) 0:02:30.905 ********
2026-01-30 04:32:15.513046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-30 04:32:15.513078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-30 04:32:15.513099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-30 04:32:15.513111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-30 04:32:15.513124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-30 04:32:15.513141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-30 04:32:15.513153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-30 04:32:15.513166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-30 04:32:15.513192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-30 04:32:16.984370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-30 04:32:16.984488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-30 04:32:16.984530
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:16.984544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:16.984555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:16.984586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:16.984599 | orchestrator | 2026-01-30 04:32:16.984618 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-01-30 04:32:16.984634 | orchestrator | Friday 30 January 2026 04:32:15 +0000 (0:00:02.369) 0:02:33.275 ******** 2026-01-30 04:32:16.984651 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:32:16.984668 | orchestrator | 2026-01-30 04:32:16.984684 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-01-30 04:32:16.984774 | orchestrator | Friday 30 January 2026 04:32:16 +0000 (0:00:00.132) 0:02:33.407 ******** 2026-01-30 04:32:16.984788 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:32:16.984817 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:32:16.984828 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:32:16.984838 | orchestrator | 2026-01-30 04:32:16.984848 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-01-30 04:32:16.984858 | orchestrator | Friday 30 January 2026 04:32:16 +0000 (0:00:00.294) 0:02:33.701 ******** 2026-01-30 04:32:16.984877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:16.984896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:16.984923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:16.984951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 04:32:16.984964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:32:16.984976 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:32:16.984996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:21.795134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:21.795247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:21.795281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 04:32:21.795316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:32:21.795330 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:32:21.795345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:21.795409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:21.795443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:21.795475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 04:32:21.795503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:32:21.795531 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:32:21.795552 | orchestrator | 2026-01-30 04:32:21.795572 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-30 04:32:21.795592 | orchestrator | Friday 30 January 2026 04:32:17 +0000 (0:00:00.709) 0:02:34.410 ******** 2026-01-30 04:32:21.795611 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:32:21.795630 | orchestrator | 2026-01-30 04:32:21.795650 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-30 04:32:21.795670 | orchestrator | Friday 30 January 2026 04:32:17 +0000 (0:00:00.661) 0:02:35.072 ******** 2026-01-30 04:32:21.795691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:21.795745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:21.795783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:23.326125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:23.326285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:23.326305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:23.326318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:23.326457 | orchestrator | 2026-01-30 04:32:23.326470 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-30 04:32:23.326483 | orchestrator | Friday 30 January 2026 04:32:22 +0000 (0:00:05.040) 0:02:40.113 ******** 2026-01-30 04:32:23.326505 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:23.437226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:23.437337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.437351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.437362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:32:23.437373 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:32:23.437385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:23.437395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:23.437445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.437470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.437485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:32:23.437500 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:32:23.437515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:23.437530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:23.437543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.437580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})
2026-01-30 04:32:23.982466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-30 04:32:23.982550 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:32:23.982563 | orchestrator |
2026-01-30 04:32:23.982572 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-01-30 04:32:23.982580 | orchestrator | Friday 30 January 2026 04:32:23 +0000 (0:00:00.662) 0:02:40.775 ********
2026-01-30 04:32:23.982589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-30 04:32:23.982600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:23.982609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.982636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.982658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:32:23.982666 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:32:23.982679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:23.982687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:23.982695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.982764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 04:32:23.982792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 04:32:23.982808 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:32:23.982827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 04:32:28.552532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 04:32:28.552626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-30 04:32:28.552638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-30 04:32:28.552648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-30 04:32:28.552676 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:32:28.552686 | orchestrator |
2026-01-30 04:32:28.552742 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-01-30 04:32:28.552752 | orchestrator | Friday 30 January 2026 04:32:24 +0000 (0:00:00.970) 0:02:41.746 ********
2026-01-30 04:32:28.552761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-30 04:32:28.552799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:28.552808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:28.552816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:28.552829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:28.552837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:28.552845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:28.552861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:43.485546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:43.485654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:43.485680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:43.485835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:43.485862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:43.485884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-01-30 04:32:43.485944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-30 04:32:43.485959 | orchestrator |
2026-01-30 04:32:43.485973 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-01-30 04:32:43.485986 | orchestrator | Friday 30 January 2026 04:32:29 +0000 (0:00:05.204) 0:02:46.951 ********
2026-01-30 04:32:43.485997 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-30 04:32:43.486009 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-30 04:32:43.486099 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-01-30 04:32:43.486112 | orchestrator |
2026-01-30 04:32:43.486124 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-01-30 04:32:43.486137 | orchestrator | Friday 30 January 2026 04:32:31 +0000 (0:00:01.534) 0:02:48.485 ********
2026-01-30 04:32:43.486152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:43.486178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:43.486192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:32:43.486230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:58.845951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:58.846187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:32:58.846248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:32:58.846451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-30 04:32:58.846470 | orchestrator |
2026-01-30 04:32:58.846490 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-01-30 04:32:58.846509 | orchestrator | Friday 30 January 2026 04:32:46 +0000 (0:00:15.460) 0:03:03.945 ********
2026-01-30 04:32:58.846528 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:32:58.846548 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:32:58.846566 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:32:58.846583 | orchestrator |
2026-01-30 04:32:58.846600 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-01-30 04:32:58.846619 | orchestrator | Friday 30 January 2026 04:32:48 +0000 (0:00:02.053) 0:03:05.998 ********
2026-01-30 04:32:58.846637 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-30 04:32:58.846655 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-30 04:32:58.846673 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-30 04:32:58.846691 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-01-30 04:32:58.846736 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-01-30 04:32:58.846752 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-01-30 04:32:58.846768 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-01-30 04:32:58.846784 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-01-30 04:32:58.846800 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-01-30 04:32:58.846817 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-01-30 04:32:58.846843 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-01-30 04:32:58.846860 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-01-30 04:32:58.846876 | orchestrator |
2026-01-30 04:32:58.846893 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-01-30 04:32:58.846909 | orchestrator | Friday 30 January 2026 04:32:53 +0000 (0:00:05.088) 0:03:11.087 ********
2026-01-30 04:32:58.846937 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-01-30 04:32:58.846953 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-01-30 04:32:58.846984 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-01-30 04:33:07.018893 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-01-30 04:33:07.018971 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-01-30 04:33:07.018978 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-01-30 04:33:07.018984 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-01-30 04:33:07.018989 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-01-30 04:33:07.018994 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-01-30 04:33:07.018999 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-01-30 04:33:07.019004 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-01-30 04:33:07.019009 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-01-30 04:33:07.019014 | orchestrator |
2026-01-30 04:33:07.019020 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-01-30 04:33:07.019026 | orchestrator | Friday 30 January 2026 04:32:58 +0000 (0:00:05.085) 0:03:16.172 ********
2026-01-30 04:33:07.019031 | orchestrator | changed: [testbed-node-0] =>
(item=client.cert-and-key.pem) 2026-01-30 04:33:07.019035 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-30 04:33:07.019040 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-30 04:33:07.019044 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-30 04:33:07.019049 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-30 04:33:07.019054 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-30 04:33:07.019059 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-30 04:33:07.019063 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-30 04:33:07.019068 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-30 04:33:07.019072 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-30 04:33:07.019077 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-30 04:33:07.019081 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-30 04:33:07.019086 | orchestrator | 2026-01-30 04:33:07.019091 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-30 04:33:07.019095 | orchestrator | Friday 30 January 2026 04:33:04 +0000 (0:00:05.193) 0:03:21.366 ******** 2026-01-30 04:33:07.019103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:33:07.019111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:33:07.019165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 04:33:07.019172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:33:07.019178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-30 04:33:07.019183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-01-30 04:33:07.019189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:33:07.019195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:33:07.019207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-30 04:33:07.019216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:34:30.404871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:34:30.405017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-30 04:34:30.405034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:30.405050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:30.405094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:30.405107 | orchestrator | 2026-01-30 
04:34:30.405121 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-30 04:34:30.405150 | orchestrator | Friday 30 January 2026 04:33:07 +0000 (0:00:03.669) 0:03:25.036 ********
2026-01-30 04:34:30.405161 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:34:30.405169 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:34:30.405176 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:34:30.405183 | orchestrator |
2026-01-30 04:34:30.405190 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-01-30 04:34:30.405197 | orchestrator | Friday 30 January 2026 04:33:08 +0000 (0:00:00.474) 0:03:25.511 ********
2026-01-30 04:34:30.405204 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405211 | orchestrator |
2026-01-30 04:34:30.405217 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-01-30 04:34:30.405224 | orchestrator | Friday 30 January 2026 04:33:10 +0000 (0:00:02.329) 0:03:27.840 ********
2026-01-30 04:34:30.405231 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405238 | orchestrator |
2026-01-30 04:34:30.405244 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-01-30 04:34:30.405251 | orchestrator | Friday 30 January 2026 04:33:12 +0000 (0:00:02.228) 0:03:30.069 ********
2026-01-30 04:34:30.405258 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405265 | orchestrator |
2026-01-30 04:34:30.405272 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-01-30 04:34:30.405280 | orchestrator | Friday 30 January 2026 04:33:15 +0000 (0:00:02.389) 0:03:32.458 ********
2026-01-30 04:34:30.405303 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405310 | orchestrator |
2026-01-30 04:34:30.405317 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-01-30 04:34:30.405324 | orchestrator | Friday 30 January 2026 04:33:17 +0000 (0:00:02.315) 0:03:34.774 ********
2026-01-30 04:34:30.405331 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405337 | orchestrator |
2026-01-30 04:34:30.405344 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-30 04:34:30.405352 | orchestrator | Friday 30 January 2026 04:33:40 +0000 (0:00:22.706) 0:03:57.481 ********
2026-01-30 04:34:30.405360 | orchestrator |
2026-01-30 04:34:30.405367 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-30 04:34:30.405375 | orchestrator | Friday 30 January 2026 04:33:40 +0000 (0:00:00.065) 0:03:57.546 ********
2026-01-30 04:34:30.405382 | orchestrator |
2026-01-30 04:34:30.405389 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-01-30 04:34:30.405400 | orchestrator | Friday 30 January 2026 04:33:40 +0000 (0:00:00.060) 0:03:57.607 ********
2026-01-30 04:34:30.405416 | orchestrator |
2026-01-30 04:34:30.405430 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-01-30 04:34:30.405441 | orchestrator | Friday 30 January 2026 04:33:40 +0000 (0:00:00.064) 0:03:57.672 ********
2026-01-30 04:34:30.405452 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405463 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:34:30.405483 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:34:30.405493 | orchestrator |
2026-01-30 04:34:30.405503 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-01-30 04:34:30.405513 | orchestrator | Friday 30 January 2026 04:33:57 +0000 (0:00:17.231) 0:04:14.903 ********
2026-01-30 04:34:30.405524 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405535 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:34:30.405546 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:34:30.405556 | orchestrator |
2026-01-30 04:34:30.405566 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-01-30 04:34:30.405577 | orchestrator | Friday 30 January 2026 04:34:08 +0000 (0:00:11.427) 0:04:26.331 ********
2026-01-30 04:34:30.405588 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405597 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:34:30.405608 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:34:30.405618 | orchestrator |
2026-01-30 04:34:30.405628 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-01-30 04:34:30.405639 | orchestrator | Friday 30 January 2026 04:34:14 +0000 (0:00:05.319) 0:04:31.651 ********
2026-01-30 04:34:30.405649 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:34:30.405660 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405671 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:34:30.405682 | orchestrator |
2026-01-30 04:34:30.405715 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-01-30 04:34:30.405729 | orchestrator | Friday 30 January 2026 04:34:24 +0000 (0:00:10.430) 0:04:42.082 ********
2026-01-30 04:34:30.405741 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:34:30.405752 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:34:30.405763 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:34:30.405774 | orchestrator |
2026-01-30 04:34:30.405784 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:34:30.405797 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-30 04:34:30.405810 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:34:30.405821 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:34:30.405833 | orchestrator |
2026-01-30 04:34:30.405844 | orchestrator |
2026-01-30 04:34:30.405855 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:34:30.405866 | orchestrator | Friday 30 January 2026 04:34:30 +0000 (0:00:05.636) 0:04:47.718 ********
2026-01-30 04:34:30.405877 | orchestrator | ===============================================================================
2026-01-30 04:34:30.405888 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.71s
2026-01-30 04:34:30.405899 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.23s
2026-01-30 04:34:30.405910 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.06s
2026-01-30 04:34:30.405930 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.89s
2026-01-30 04:34:30.405941 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.46s
2026-01-30 04:34:30.405952 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.43s
2026-01-30 04:34:30.405962 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.07s
2026-01-30 04:34:30.405973 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.43s
2026-01-30 04:34:30.405983 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.52s
2026-01-30 04:34:30.405994 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.87s
2026-01-30 04:34:30.406005 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.53s
2026-01-30 04:34:30.406088 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.59s
2026-01-30 04:34:30.406099 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.64s
2026-01-30 04:34:30.406111 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.49s
2026-01-30 04:34:30.406138 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.32s
2026-01-30 04:34:30.683640 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.23s
2026-01-30 04:34:30.683765 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.20s
2026-01-30 04:34:30.683777 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.20s
2026-01-30 04:34:30.683786 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.19s
2026-01-30 04:34:30.683794 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.09s
2026-01-30 04:34:33.006273 | orchestrator | 2026-01-30 04:34:33 | INFO  | Task 26439718-738d-4193-8190-2f2c2f59e226 (ceilometer) was prepared for execution.
2026-01-30 04:34:33.006353 | orchestrator | 2026-01-30 04:34:33 | INFO  | It takes a moment until task 26439718-738d-4193-8190-2f2c2f59e226 (ceilometer) has been started and output is visible here.
2026-01-30 04:34:55.412964 | orchestrator |
2026-01-30 04:34:55.413101 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:34:55.413128 | orchestrator |
2026-01-30 04:34:55.413147 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:34:55.413166 | orchestrator | Friday 30 January 2026 04:34:36 +0000 (0:00:00.246) 0:00:00.246 ********
2026-01-30 04:34:55.413184 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:34:55.413205 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:34:55.413223 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:34:55.413242 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:34:55.413260 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:34:55.413277 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:34:55.413294 | orchestrator |
2026-01-30 04:34:55.413312 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:34:55.413328 | orchestrator | Friday 30 January 2026 04:34:37 +0000 (0:00:00.663) 0:00:00.909 ********
2026-01-30 04:34:55.413346 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True)
2026-01-30 04:34:55.413363 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True)
2026-01-30 04:34:55.413379 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True)
2026-01-30 04:34:55.413396 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True)
2026-01-30 04:34:55.413413 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True)
2026-01-30 04:34:55.413430 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True)
2026-01-30 04:34:55.413446 | orchestrator |
2026-01-30 04:34:55.413462 | orchestrator | PLAY [Apply role ceilometer] ***************************************************
2026-01-30 04:34:55.413480 | orchestrator |
2026-01-30 04:34:55.413498 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-01-30 04:34:55.413515 | orchestrator | Friday 30 January 2026 04:34:38 +0000 (0:00:00.434) 0:00:01.343 ********
2026-01-30 04:34:55.413533 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:34:55.413553 | orchestrator |
2026-01-30 04:34:55.413572 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ********************
2026-01-30 04:34:55.413589 | orchestrator | Friday 30 January 2026 04:34:38 +0000 (0:00:00.830) 0:00:02.174 ********
2026-01-30 04:34:55.413607 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:34:55.413626 | orchestrator |
2026-01-30 04:34:55.413644 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] *******************
2026-01-30 04:34:55.413661 | orchestrator | Friday 30 January 2026 04:34:38 +0000 (0:00:00.103) 0:00:02.277 ********
2026-01-30 04:34:55.413753 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:34:55.413777 | orchestrator |
2026-01-30 04:34:55.413797 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ********************
2026-01-30 04:34:55.413815 | orchestrator | Friday 30 January 2026 04:34:39 +0000 (0:00:00.137) 0:00:02.414 ********
2026-01-30 04:34:55.413834 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-30 04:34:55.413853 | orchestrator |
2026-01-30 04:34:55.413873 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] ***********************
2026-01-30 04:34:55.413893 | orchestrator | Friday 30 January 2026 04:34:42 +0000 (0:00:03.739) 0:00:06.153 ********
2026-01-30 04:34:55.413912 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-30 04:34:55.413930 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service)
2026-01-30 04:34:55.413948 | orchestrator |
2026-01-30 04:34:55.413967 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-01-30 04:34:55.414006 | orchestrator | Friday 30 January 2026 04:34:46 +0000 (0:00:03.725) 0:00:09.879 ******** 2026-01-30 04:34:55.414117 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30 04:34:55.414137 | orchestrator | 2026-01-30 04:34:55.414154 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-01-30 04:34:55.414172 | orchestrator | Friday 30 January 2026 04:34:49 +0000 (0:00:03.236) 0:00:13.115 ******** 2026-01-30 04:34:55.414191 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-01-30 04:34:55.414208 | orchestrator | 2026-01-30 04:34:55.414228 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-01-30 04:34:55.414247 | orchestrator | Friday 30 January 2026 04:34:53 +0000 (0:00:03.997) 0:00:17.113 ******** 2026-01-30 04:34:55.414267 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:34:55.414285 | orchestrator | 2026-01-30 04:34:55.414303 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-01-30 04:34:55.414320 | orchestrator | Friday 30 January 2026 04:34:53 +0000 (0:00:00.134) 0:00:17.248 ******** 2026-01-30 04:34:55.414344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:55.414398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:55.414420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:55.414459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:55.414491 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:34:55.414512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:34:55.414529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:34:55.414563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:00.074218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:00.074328 | orchestrator | 2026-01-30 04:35:00.074339 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-01-30 04:35:00.074347 | orchestrator | Friday 30 January 2026 04:34:55 +0000 (0:00:01.439) 0:00:18.687 ******** 2026-01-30 04:35:00.074354 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-01-30 04:35:00.074365 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 04:35:00.074374 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 04:35:00.074384 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-30 04:35:00.074393 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-30 04:35:00.074402 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-30 04:35:00.074412 | orchestrator | 2026-01-30 04:35:00.074418 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-01-30 04:35:00.074426 | orchestrator | Friday 30 January 2026 04:34:56 +0000 (0:00:01.590) 0:00:20.278 ******** 2026-01-30 04:35:00.074432 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:35:00.074438 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:35:00.074444 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:35:00.074450 | orchestrator | ok: [testbed-node-3] 2026-01-30 04:35:00.074455 | orchestrator | ok: [testbed-node-4] 2026-01-30 04:35:00.074461 | orchestrator | ok: [testbed-node-5] 2026-01-30 04:35:00.074467 | orchestrator | 2026-01-30 04:35:00.074473 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-01-30 04:35:00.074479 | orchestrator | Friday 30 January 2026 04:34:57 +0000 (0:00:00.566) 0:00:20.844 ******** 2026-01-30 04:35:00.074485 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:00.074492 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:00.074498 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:00.074504 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:00.074509 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:00.074515 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:00.074521 | orchestrator | 2026-01-30 04:35:00.074527 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-01-30 04:35:00.074534 | orchestrator | Friday 30 January 2026 04:34:58 +0000 (0:00:00.807) 0:00:21.652 ******** 2026-01-30 04:35:00.074540 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:35:00.074546 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:35:00.074551 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:35:00.074557 | orchestrator | ok: [testbed-node-3] 2026-01-30 04:35:00.074563 | orchestrator | ok: [testbed-node-4] 2026-01-30 04:35:00.074569 | orchestrator | ok: [testbed-node-5] 2026-01-30 04:35:00.074578 | orchestrator | 2026-01-30 04:35:00.074623 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-01-30 04:35:00.074630 | orchestrator | Friday 30 January 2026 04:34:58 +0000 (0:00:00.622) 0:00:22.275 ******** 2026-01-30 04:35:00.074637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:00.074645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:00.074658 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:00.074679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:00.074686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:00.074692 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:00.074824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:00.074843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:00.074858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:00.074874 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:00.074882 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:00.074889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:00.074903 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:00.074919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:04.731269 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:04.731377 | orchestrator | 2026-01-30 04:35:04.731394 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-01-30 04:35:04.731408 | orchestrator | Friday 30 January 2026 04:35:00 +0000 (0:00:01.077) 0:00:23.352 ******** 2026-01-30 04:35:04.731422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:04.731438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:04.731451 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:04.731479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:04.731492 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:04.731527 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:04.731540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:04.731552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-01-30 04:35:04.731563 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:04.731592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:04.731605 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:04.731616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:04.731628 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:04.731644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:04.731664 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:04.731675 | orchestrator | 2026-01-30 04:35:04.731688 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-01-30 04:35:04.731740 | orchestrator | Friday 30 January 2026 04:35:00 +0000 (0:00:00.815) 0:00:24.167 ******** 2026-01-30 04:35:04.731754 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:35:04.731765 | orchestrator | 2026-01-30 04:35:04.731776 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-01-30 04:35:04.731788 | orchestrator | Friday 30 January 2026 04:35:01 +0000 (0:00:00.679) 0:00:24.846 ******** 2026-01-30 04:35:04.731799 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:35:04.731814 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:35:04.731826 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:35:04.731838 | orchestrator | ok: [testbed-node-3] 2026-01-30 04:35:04.731850 | orchestrator | ok: [testbed-node-4] 2026-01-30 04:35:04.731862 | orchestrator | ok: [testbed-node-5] 2026-01-30 04:35:04.731874 | orchestrator | 2026-01-30 04:35:04.731886 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-01-30 04:35:04.731899 | orchestrator | Friday 30 January 2026 04:35:02 +0000 (0:00:00.800) 
0:00:25.647 ******** 2026-01-30 04:35:04.731917 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:35:04.731940 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:35:04.731968 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:35:04.731985 | orchestrator | ok: [testbed-node-3] 2026-01-30 04:35:04.732077 | orchestrator | ok: [testbed-node-4] 2026-01-30 04:35:04.732099 | orchestrator | ok: [testbed-node-5] 2026-01-30 04:35:04.732117 | orchestrator | 2026-01-30 04:35:04.732135 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-01-30 04:35:04.732152 | orchestrator | Friday 30 January 2026 04:35:03 +0000 (0:00:00.965) 0:00:26.612 ******** 2026-01-30 04:35:04.732171 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:04.732190 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:04.732210 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:04.732227 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:04.732319 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:04.732331 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:04.732370 | orchestrator | 2026-01-30 04:35:04.732384 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-01-30 04:35:04.732395 | orchestrator | Friday 30 January 2026 04:35:04 +0000 (0:00:00.785) 0:00:27.398 ******** 2026-01-30 04:35:04.732406 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:04.732417 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:04.732428 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:04.732439 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:04.732450 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:04.732461 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:04.732472 | orchestrator | 2026-01-30 04:35:09.603437 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-01-30 04:35:09.603563 | orchestrator | Friday 30 January 2026 04:35:04 +0000 (0:00:00.615) 0:00:28.014 ******** 2026-01-30 04:35:09.603585 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:35:09.603604 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 04:35:09.603620 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 04:35:09.603635 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-30 04:35:09.603646 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-30 04:35:09.603656 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-30 04:35:09.603665 | orchestrator | 2026-01-30 04:35:09.603734 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-01-30 04:35:09.603748 | orchestrator | Friday 30 January 2026 04:35:06 +0000 (0:00:01.566) 0:00:29.581 ******** 2026-01-30 04:35:09.603761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:09.603813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:09.603825 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:09.603836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:09.603847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:09.603857 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:09.603867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:09.603898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:09.603909 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:09.603926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:09.603937 | orchestrator | skipping: [testbed-node-3] 
2026-01-30 04:35:09.603955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:09.603967 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:09.603979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:09.603990 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:09.604002 | orchestrator | 2026-01-30 04:35:09.604014 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-01-30 04:35:09.604026 | orchestrator | Friday 30 January 2026 04:35:07 +0000 (0:00:00.763) 0:00:30.345 ******** 2026-01-30 04:35:09.604038 | orchestrator | 
skipping: [testbed-node-0] 2026-01-30 04:35:09.604047 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:09.604057 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:09.604067 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:09.604076 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:09.604086 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:09.604096 | orchestrator | 2026-01-30 04:35:09.604105 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-01-30 04:35:09.604115 | orchestrator | Friday 30 January 2026 04:35:07 +0000 (0:00:00.792) 0:00:31.137 ******** 2026-01-30 04:35:09.604125 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:35:09.604135 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 04:35:09.604144 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 04:35:09.604154 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-30 04:35:09.604164 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-30 04:35:09.604173 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-30 04:35:09.604183 | orchestrator | 2026-01-30 04:35:09.604193 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-01-30 04:35:09.604203 | orchestrator | Friday 30 January 2026 04:35:09 +0000 (0:00:01.343) 0:00:32.481 ******** 2026-01-30 04:35:09.604227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.303379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:15.304442 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:15.304500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.304535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:15.304549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.304561 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:15.304573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:15.304607 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:15.304622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.304634 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:15.304671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.304683 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:15.304695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.304761 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:15.304773 | orchestrator | 2026-01-30 04:35:15.304786 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-01-30 04:35:15.304804 | orchestrator | Friday 30 January 2026 04:35:10 +0000 (0:00:01.015) 0:00:33.496 ******** 2026-01-30 04:35:15.304816 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:15.304827 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:15.304838 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:15.304849 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:15.304860 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:15.304871 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:15.304882 | orchestrator | 2026-01-30 04:35:15.304893 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-01-30 04:35:15.304904 | orchestrator | Friday 30 January 2026 04:35:11 +0000 (0:00:00.805) 0:00:34.301 ******** 2026-01-30 04:35:15.304915 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:15.304926 | orchestrator | 2026-01-30 04:35:15.304937 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-01-30 04:35:15.304949 | orchestrator | Friday 30 January 2026 04:35:11 +0000 (0:00:00.150) 0:00:34.452 ******** 2026-01-30 04:35:15.304960 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:15.304971 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:15.304982 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:15.304993 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:15.305004 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:15.305024 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:15.305035 | 
orchestrator | 2026-01-30 04:35:15.305046 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-01-30 04:35:15.305057 | orchestrator | Friday 30 January 2026 04:35:11 +0000 (0:00:00.617) 0:00:35.069 ******** 2026-01-30 04:35:15.305069 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 04:35:15.305082 | orchestrator | 2026-01-30 04:35:15.305093 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-01-30 04:35:15.305104 | orchestrator | Friday 30 January 2026 04:35:12 +0000 (0:00:01.220) 0:00:36.289 ******** 2026-01-30 04:35:15.305116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:15.305137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:15.905348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:15.905473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:15.905521 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:15.905571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:15.905592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:15.905612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:15.905675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:15.905722 | orchestrator | 2026-01-30 04:35:15.905745 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-01-30 04:35:15.905764 | orchestrator | Friday 30 January 2026 04:35:15 +0000 (0:00:02.295) 0:00:38.585 ******** 2026-01-30 04:35:15.905784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.905813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:15.905850 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:15.905871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.905891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:15.905910 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:15.905927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:15.905961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:17.709345 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:17.709414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:17.709421 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:17.709436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:17.709460 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:17.709464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-01-30 04:35:17.709468 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:17.709472 | orchestrator | 2026-01-30 04:35:17.709477 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-01-30 04:35:17.709484 | orchestrator | Friday 30 January 2026 04:35:16 +0000 (0:00:00.934) 0:00:39.519 ******** 2026-01-30 04:35:17.709491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:17.709498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:17.709519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:17.709526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:17.709540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:17.709545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:17.709549 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:17.709553 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:17.709557 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:17.709561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:17.709565 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:17.709569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:17.709573 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:17.709581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:25.105285 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:25.105390 | orchestrator | 2026-01-30 04:35:25.105406 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-01-30 04:35:25.105420 | orchestrator | Friday 30 January 2026 04:35:17 +0000 (0:00:01.469) 0:00:40.989 ******** 2026-01-30 04:35:25.105452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105508 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:25.105592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:25.105604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:25.105616 | orchestrator | 2026-01-30 04:35:25.105628 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-01-30 04:35:25.105640 | orchestrator | Friday 30 January 2026 04:35:20 +0000 (0:00:02.459) 0:00:43.448 
******** 2026-01-30 04:35:25.105652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105664 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:25.105689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.124607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.124794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.124822 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.124841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:34.124858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:34.124898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:34.124909 | orchestrator | 2026-01-30 04:35:34.124921 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-01-30 04:35:34.124933 | orchestrator | Friday 30 January 2026 04:35:25 +0000 (0:00:04.940) 0:00:48.388 ******** 2026-01-30 04:35:34.124960 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:35:34.124971 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 04:35:34.124980 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 04:35:34.124989 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-30 04:35:34.124997 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-30 04:35:34.125006 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-30 04:35:34.125014 | orchestrator | 2026-01-30 04:35:34.125023 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-01-30 04:35:34.125040 | orchestrator | Friday 30 January 2026 04:35:26 +0000 (0:00:01.453) 0:00:49.841 ******** 2026-01-30 04:35:34.125049 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:34.125058 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:34.125066 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:34.125075 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:34.125083 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:34.125092 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:34.125103 | orchestrator | 2026-01-30 04:35:34.125113 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-01-30 
04:35:34.125124 | orchestrator | Friday 30 January 2026 04:35:27 +0000 (0:00:00.554) 0:00:50.396 ******** 2026-01-30 04:35:34.125133 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:34.125143 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:34.125153 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:34.125163 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:35:34.125173 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:35:34.125182 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:35:34.125192 | orchestrator | 2026-01-30 04:35:34.125202 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-01-30 04:35:34.125214 | orchestrator | Friday 30 January 2026 04:35:28 +0000 (0:00:01.547) 0:00:51.943 ******** 2026-01-30 04:35:34.125229 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:34.125252 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:34.125268 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:34.125283 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:35:34.125297 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:35:34.125312 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:35:34.125324 | orchestrator | 2026-01-30 04:35:34.125338 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-01-30 04:35:34.125352 | orchestrator | Friday 30 January 2026 04:35:30 +0000 (0:00:01.389) 0:00:53.333 ******** 2026-01-30 04:35:34.125365 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:35:34.125379 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-30 04:35:34.125393 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-30 04:35:34.125409 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-30 04:35:34.125424 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-30 04:35:34.125453 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-01-30 04:35:34.125469 | orchestrator | 2026-01-30 04:35:34.125485 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-01-30 04:35:34.125500 | orchestrator | Friday 30 January 2026 04:35:31 +0000 (0:00:01.556) 0:00:54.890 ******** 2026-01-30 04:35:34.125515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.125533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.125548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.125584 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.953695 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.953905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:35:34.953969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:34.953993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:34.954082 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:35:34.954101 | orchestrator | 2026-01-30 04:35:34.954115 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-01-30 04:35:34.954128 | orchestrator | Friday 30 January 2026 04:35:34 +0000 (0:00:02.511) 0:00:57.401 ******** 2026-01-30 04:35:34.954154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:34.954189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:34.954204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:34.954227 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:34.954242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:34.954256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:34.954268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:34.954281 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:34.954299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:34.954313 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:34.954324 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:34.954343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471344 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:38.471432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471444 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:38.471451 | orchestrator | 2026-01-30 04:35:38.471459 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-01-30 04:35:38.471467 | orchestrator | Friday 30 January 2026 04:35:34 +0000 (0:00:00.838) 0:00:58.239 ******** 2026-01-30 04:35:38.471473 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:38.471479 | orchestrator | skipping: 
[testbed-node-1] 2026-01-30 04:35:38.471486 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:38.471492 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:38.471498 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:38.471504 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:38.471511 | orchestrator | 2026-01-30 04:35:38.471517 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-01-30 04:35:38.471524 | orchestrator | Friday 30 January 2026 04:35:35 +0000 (0:00:00.811) 0:00:59.051 ******** 2026-01-30 04:35:38.471532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:38.471562 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471569 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:35:38.471592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:38.471599 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:35:38.471618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 04:35:38.471632 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:35:38.471638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471645 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:35:38.471652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471658 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:35:38.471669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-01-30 04:35:38.471681 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:35:38.471688 | orchestrator | 2026-01-30 04:35:38.471694 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-01-30 04:35:38.471761 | orchestrator | Friday 30 January 2026 04:35:36 +0000 (0:00:00.858) 0:00:59.910 ******** 2026-01-30 04:35:38.471776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:10.784030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:10.784145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:10.784161 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:10.784175 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:10.784227 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:10.784241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:36:10.784272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:36:10.784285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-01-30 04:36:10.784296 | orchestrator | 
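(Editor's note, not part of the job output: the "Check ceilometer containers" task above iterates over a dict of service definitions, each carrying a `healthcheck` block whose `test` is either a `CMD-SHELL` command list or the string `'NONE'`. The sketch below mirrors the exact item shapes printed in the log, using two of the logged definitions verbatim, to show how one could distinguish containers with an active healthcheck from those where it is disabled; the helper name `active_healthchecks` is hypothetical and not part of kolla-ansible.)

```python
# Minimal sketch: service definitions copied from the log records above.
# A healthcheck whose 'test' is a list (['CMD-SHELL', ...]) is active;
# the string 'NONE' disables it (as seen for ceilometer_central).
services = {
    "ceilometer-notification": {
        "container_name": "ceilometer_notification",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL",
                     "healthcheck_port ceilometer-agent-notification 5672"],
            "timeout": "30",
        },
    },
    "ceilometer-central": {
        "container_name": "ceilometer_central",
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": "NONE", "timeout": "30",
        },
    },
}

def active_healthchecks(defs):
    """Return container names whose healthcheck test is a real command list."""
    return [v["container_name"] for v in defs.values()
            if isinstance(v["healthcheck"]["test"], list)]

print(active_healthchecks(services))  # ['ceilometer_notification']
```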
2026-01-30 04:36:10.784310 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-01-30 04:36:10.784322 | orchestrator | Friday 30 January 2026 04:35:38 +0000 (0:00:01.844) 0:01:01.754 ******** 2026-01-30 04:36:10.784334 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:36:10.784346 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:36:10.784357 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:36:10.784368 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:36:10.784379 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:36:10.784389 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:36:10.784402 | orchestrator | 2026-01-30 04:36:10.784421 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-01-30 04:36:10.784439 | orchestrator | Friday 30 January 2026 04:35:39 +0000 (0:00:00.573) 0:01:02.328 ******** 2026-01-30 04:36:10.784456 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:36:10.784474 | orchestrator | 2026-01-30 04:36:10.784492 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-30 04:36:10.784508 | orchestrator | Friday 30 January 2026 04:35:43 +0000 (0:00:04.571) 0:01:06.900 ******** 2026-01-30 04:36:10.784525 | orchestrator | 2026-01-30 04:36:10.784556 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-30 04:36:10.784577 | orchestrator | Friday 30 January 2026 04:35:43 +0000 (0:00:00.069) 0:01:06.969 ******** 2026-01-30 04:36:10.784594 | orchestrator | 2026-01-30 04:36:10.784613 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-30 04:36:10.784632 | orchestrator | Friday 30 January 2026 04:35:43 +0000 (0:00:00.068) 0:01:07.037 ******** 2026-01-30 04:36:10.784651 | orchestrator | 2026-01-30 04:36:10.784672 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-01-30 04:36:10.784693 | orchestrator | Friday 30 January 2026 04:35:43 +0000 (0:00:00.231) 0:01:07.269 ******** 2026-01-30 04:36:10.784766 | orchestrator | 2026-01-30 04:36:10.784788 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-30 04:36:10.784807 | orchestrator | Friday 30 January 2026 04:35:44 +0000 (0:00:00.067) 0:01:07.336 ******** 2026-01-30 04:36:10.784826 | orchestrator | 2026-01-30 04:36:10.784846 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-01-30 04:36:10.784866 | orchestrator | Friday 30 January 2026 04:35:44 +0000 (0:00:00.065) 0:01:07.402 ******** 2026-01-30 04:36:10.784885 | orchestrator | 2026-01-30 04:36:10.784902 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-01-30 04:36:10.784916 | orchestrator | Friday 30 January 2026 04:35:44 +0000 (0:00:00.069) 0:01:07.472 ******** 2026-01-30 04:36:10.784927 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:36:10.784938 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:36:10.784949 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:36:10.784960 | orchestrator | 2026-01-30 04:36:10.784979 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-01-30 04:36:10.784990 | orchestrator | Friday 30 January 2026 04:35:49 +0000 (0:00:05.532) 0:01:13.004 ******** 2026-01-30 04:36:10.785001 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:36:10.785012 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:36:10.785023 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:36:10.785034 | orchestrator | 2026-01-30 04:36:10.785045 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-01-30 04:36:10.785056 | orchestrator | Friday 30 January 2026 04:35:59 +0000 
(0:00:09.545) 0:01:22.550 ******** 2026-01-30 04:36:10.785067 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:36:10.785077 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:36:10.785088 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:36:10.785099 | orchestrator | 2026-01-30 04:36:10.785110 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:36:10.785122 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-30 04:36:10.785135 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 04:36:10.785158 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 04:36:11.173987 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-30 04:36:11.174165 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-30 04:36:11.174182 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-30 04:36:11.174195 | orchestrator | 2026-01-30 04:36:11.174207 | orchestrator | 2026-01-30 04:36:11.174219 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:36:11.174232 | orchestrator | Friday 30 January 2026 04:36:10 +0000 (0:00:11.505) 0:01:34.055 ******** 2026-01-30 04:36:11.174267 | orchestrator | =============================================================================== 2026-01-30 04:36:11.174279 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.51s 2026-01-30 04:36:11.174290 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.55s 2026-01-30 04:36:11.174302 | orchestrator | ceilometer : Restart 
ceilometer-notification container ------------------ 5.53s 2026-01-30 04:36:11.174320 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.94s 2026-01-30 04:36:11.174337 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.57s 2026-01-30 04:36:11.174356 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.00s 2026-01-30 04:36:11.174375 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.74s 2026-01-30 04:36:11.174393 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.73s 2026-01-30 04:36:11.174410 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.24s 2026-01-30 04:36:11.174429 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.51s 2026-01-30 04:36:11.174447 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.46s 2026-01-30 04:36:11.174465 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.30s 2026-01-30 04:36:11.174484 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.84s 2026-01-30 04:36:11.174502 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.59s 2026-01-30 04:36:11.174522 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.57s 2026-01-30 04:36:11.174543 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.56s 2026-01-30 04:36:11.174563 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.55s 2026-01-30 04:36:11.174583 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.47s 2026-01-30 04:36:11.174598 | orchestrator | ceilometer : Check custom 
event_definitions.yaml exists ----------------- 1.45s 2026-01-30 04:36:11.174611 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.44s 2026-01-30 04:36:13.425297 | orchestrator | 2026-01-30 04:36:13 | INFO  | Task 5b1623f0-1c8b-4345-9bb3-260a61034d86 (aodh) was prepared for execution. 2026-01-30 04:36:13.425403 | orchestrator | 2026-01-30 04:36:13 | INFO  | It takes a moment until task 5b1623f0-1c8b-4345-9bb3-260a61034d86 (aodh) has been started and output is visible here. 2026-01-30 04:36:45.230302 | orchestrator | 2026-01-30 04:36:45.230439 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:36:45.230456 | orchestrator | 2026-01-30 04:36:45.230466 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:36:45.230476 | orchestrator | Friday 30 January 2026 04:36:17 +0000 (0:00:00.191) 0:00:00.191 ******** 2026-01-30 04:36:45.230486 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:36:45.230497 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:36:45.230506 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:36:45.230515 | orchestrator | 2026-01-30 04:36:45.230541 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:36:45.230551 | orchestrator | Friday 30 January 2026 04:36:17 +0000 (0:00:00.255) 0:00:00.447 ******** 2026-01-30 04:36:45.230560 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-01-30 04:36:45.230570 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-01-30 04:36:45.230579 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-01-30 04:36:45.230588 | orchestrator | 2026-01-30 04:36:45.230597 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-01-30 04:36:45.230606 | orchestrator | 2026-01-30 04:36:45.230615 | orchestrator | TASK 
[aodh : include_tasks] **************************************************** 2026-01-30 04:36:45.230624 | orchestrator | Friday 30 January 2026 04:36:17 +0000 (0:00:00.418) 0:00:00.865 ******** 2026-01-30 04:36:45.230658 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:36:45.230669 | orchestrator | 2026-01-30 04:36:45.230678 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-01-30 04:36:45.230687 | orchestrator | Friday 30 January 2026 04:36:18 +0000 (0:00:00.505) 0:00:01.371 ******** 2026-01-30 04:36:45.230696 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-01-30 04:36:45.230705 | orchestrator | 2026-01-30 04:36:45.230737 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-01-30 04:36:45.230746 | orchestrator | Friday 30 January 2026 04:36:21 +0000 (0:00:03.504) 0:00:04.875 ******** 2026-01-30 04:36:45.230754 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-01-30 04:36:45.230764 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-01-30 04:36:45.230773 | orchestrator | 2026-01-30 04:36:45.230781 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-01-30 04:36:45.230790 | orchestrator | Friday 30 January 2026 04:36:28 +0000 (0:00:06.669) 0:00:11.544 ******** 2026-01-30 04:36:45.230800 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-30 04:36:45.230812 | orchestrator | 2026-01-30 04:36:45.230822 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-01-30 04:36:45.230833 | orchestrator | Friday 30 January 2026 04:36:32 +0000 (0:00:03.426) 0:00:14.971 ******** 2026-01-30 04:36:45.230843 | orchestrator | [WARNING]: Module did not 
set no_log for update_password 2026-01-30 04:36:45.230853 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-01-30 04:36:45.230864 | orchestrator | 2026-01-30 04:36:45.230874 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-01-30 04:36:45.230884 | orchestrator | Friday 30 January 2026 04:36:36 +0000 (0:00:04.019) 0:00:18.990 ******** 2026-01-30 04:36:45.230894 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30 04:36:45.230904 | orchestrator | 2026-01-30 04:36:45.230915 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-01-30 04:36:45.230925 | orchestrator | Friday 30 January 2026 04:36:39 +0000 (0:00:03.298) 0:00:22.289 ******** 2026-01-30 04:36:45.230935 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-01-30 04:36:45.230945 | orchestrator | 2026-01-30 04:36:45.230954 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-01-30 04:36:45.230963 | orchestrator | Friday 30 January 2026 04:36:43 +0000 (0:00:03.828) 0:00:26.117 ******** 2026-01-30 04:36:45.230976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:45.231016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:45.231034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:45.231046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 
'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:45.231057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:45.231066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:45.231076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:45.231092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:46.443595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:46.443792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:46.443812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:46.443825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:46.443837 | orchestrator | 2026-01-30 04:36:46.443851 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-01-30 04:36:46.443865 | orchestrator | Friday 30 January 2026 04:36:45 +0000 (0:00:02.018) 0:00:28.136 ******** 2026-01-30 04:36:46.443876 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:36:46.443890 | orchestrator | 2026-01-30 
04:36:46.443901 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-01-30 04:36:46.443912 | orchestrator | Friday 30 January 2026 04:36:45 +0000 (0:00:00.118) 0:00:28.254 ******** 2026-01-30 04:36:46.443923 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:36:46.443935 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:36:46.443946 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:36:46.443957 | orchestrator | 2026-01-30 04:36:46.443968 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-01-30 04:36:46.443979 | orchestrator | Friday 30 January 2026 04:36:45 +0000 (0:00:00.495) 0:00:28.750 ******** 2026-01-30 04:36:46.443992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:46.444061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 04:36:46.444076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:46.444089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:46.444103 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:36:46.444116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:46.444129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 04:36:46.444150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:46.444173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:51.400244 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:36:51.400390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:51.400405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-01-30 04:36:51.400415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:51.400423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:51.400454 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:36:51.400462 | orchestrator | 2026-01-30 04:36:51.400470 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-01-30 04:36:51.400479 | orchestrator | Friday 30 January 2026 04:36:46 +0000 (0:00:00.606) 0:00:29.357 ******** 2026-01-30 04:36:51.400487 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:36:51.400496 | orchestrator | 2026-01-30 04:36:51.400503 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-01-30 04:36:51.400510 | orchestrator | Friday 
30 January 2026 04:36:47 +0000 (0:00:00.647) 0:00:30.004 ******** 2026-01-30 04:36:51.400517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:51.400547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:51.400556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:51.400564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:51.400571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-01-30 04:36:51.400585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:51.400592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:51.400609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:52.000848 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:52.000963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:52.000973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:52.001000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:52.001006 | orchestrator | 2026-01-30 04:36:52.001013 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-01-30 04:36:52.001020 | orchestrator | Friday 30 January 2026 04:36:51 +0000 (0:00:04.306) 0:00:34.310 ******** 2026-01-30 04:36:52.001032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:52.001057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 04:36:52.001083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.001092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.001099 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:36:52.001109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:52.001119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 04:36:52.001124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.001133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.001140 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:36:52.001154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:52.943380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-01-30 04:36:52.943574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.943592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.943606 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:36:52.943620 | orchestrator | 2026-01-30 04:36:52.943633 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-01-30 04:36:52.943646 | orchestrator | Friday 30 January 2026 04:36:51 +0000 (0:00:00.602) 0:00:34.912 ******** 2026-01-30 04:36:52.943659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:52.943689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 04:36:52.943702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.943792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.943816 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:36:52.943828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:52.943840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 04:36:52.943851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.943870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:52.943883 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:36:52.943906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-01-30 04:36:57.023588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-30 04:36:57.023771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-30 04:36:57.023786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-30 04:36:57.023797 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:36:57.023809 | orchestrator | 2026-01-30 04:36:57.023819 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-01-30 04:36:57.023830 | orchestrator | Friday 30 January 2026 04:36:52 +0000 (0:00:00.943) 0:00:35.856 ******** 2026-01-30 04:36:57.023840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:57.023875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:57.023906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:36:57.023923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:57.023933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:57.023942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:36:57.023951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:57.023965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:57.023975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:36:57.023996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:05.071776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:05.071907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:05.071923 | orchestrator | 2026-01-30 04:37:05.071936 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-01-30 04:37:05.071948 | orchestrator | Friday 30 January 2026 04:36:57 +0000 (0:00:04.077) 0:00:39.934 ******** 2026-01-30 04:37:05.071960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:37:05.071987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:37:05.072024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:37:05.072056 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:37:05.072067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:37:05.072078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:37:05.072088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:05.072103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:05.072134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:05.072155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:05.072174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194485 | orchestrator | 2026-01-30 04:37:10.194503 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-01-30 04:37:10.194518 | orchestrator | Friday 30 January 2026 04:37:05 +0000 (0:00:08.051) 0:00:47.985 ******** 2026-01-30 04:37:10.194530 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:37:10.194543 | orchestrator | 
changed: [testbed-node-1] 2026-01-30 04:37:10.194554 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:37:10.194566 | orchestrator | 2026-01-30 04:37:10.194577 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-01-30 04:37:10.194589 | orchestrator | Friday 30 January 2026 04:37:06 +0000 (0:00:01.791) 0:00:49.777 ******** 2026-01-30 04:37:10.194602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:37:10.194655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:37:10.194670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-30 04:37:10.194707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:37:10.194952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:38:00.974824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-01-30 04:38:00.974929 | orchestrator | 2026-01-30 04:38:00.974942 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-01-30 04:38:00.974953 | orchestrator | Friday 30 January 2026 04:37:10 +0000 (0:00:03.324) 0:00:53.101 ******** 2026-01-30 04:38:00.974962 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:38:00.974972 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:38:00.974981 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:38:00.974989 | orchestrator | 2026-01-30 04:38:00.974999 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-01-30 04:38:00.975008 | orchestrator | Friday 30 January 2026 04:37:10 +0000 (0:00:00.312) 0:00:53.414 ******** 2026-01-30 04:38:00.975017 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:38:00.975025 | orchestrator | 2026-01-30 04:38:00.975034 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-01-30 04:38:00.975065 | orchestrator | Friday 30 January 2026 04:37:12 +0000 (0:00:02.264) 0:00:55.678 ******** 2026-01-30 04:38:00.975074 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:38:00.975083 | orchestrator | 2026-01-30 
04:38:00.975092 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-01-30 04:38:00.975101 | orchestrator | Friday 30 January 2026 04:37:15 +0000 (0:00:02.376) 0:00:58.055 ******** 2026-01-30 04:38:00.975109 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:38:00.975118 | orchestrator | 2026-01-30 04:38:00.975127 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-01-30 04:38:00.975138 | orchestrator | Friday 30 January 2026 04:37:28 +0000 (0:00:13.621) 0:01:11.676 ******** 2026-01-30 04:38:00.975152 | orchestrator | 2026-01-30 04:38:00.975166 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-01-30 04:38:00.975180 | orchestrator | Friday 30 January 2026 04:37:28 +0000 (0:00:00.067) 0:01:11.744 ******** 2026-01-30 04:38:00.975194 | orchestrator | 2026-01-30 04:38:00.975209 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-01-30 04:38:00.975231 | orchestrator | Friday 30 January 2026 04:37:28 +0000 (0:00:00.067) 0:01:11.811 ******** 2026-01-30 04:38:00.975240 | orchestrator | 2026-01-30 04:38:00.975250 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-01-30 04:38:00.975259 | orchestrator | Friday 30 January 2026 04:37:29 +0000 (0:00:00.229) 0:01:12.041 ******** 2026-01-30 04:38:00.975268 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:38:00.975277 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:38:00.975285 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:38:00.975294 | orchestrator | 2026-01-30 04:38:00.975303 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-01-30 04:38:00.975312 | orchestrator | Friday 30 January 2026 04:37:34 +0000 (0:00:05.794) 0:01:17.835 ******** 2026-01-30 04:38:00.975321 | orchestrator | changed: 
[testbed-node-2]
2026-01-30 04:38:00.975331 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:38:00.975341 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:38:00.975351 | orchestrator |
2026-01-30 04:38:00.975361 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-01-30 04:38:00.975371 | orchestrator | Friday 30 January 2026 04:37:45 +0000 (0:00:10.261) 0:01:28.096 ********
2026-01-30 04:38:00.975381 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:38:00.975391 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:38:00.975401 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:38:00.975410 | orchestrator |
2026-01-30 04:38:00.975421 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-01-30 04:38:00.975431 | orchestrator | Friday 30 January 2026 04:37:50 +0000 (0:00:05.397) 0:01:33.494 ********
2026-01-30 04:38:00.975441 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:38:00.975450 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:38:00.975458 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:38:00.975467 | orchestrator |
2026-01-30 04:38:00.975475 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:38:00.975485 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 04:38:00.975496 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:38:00.975505 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:38:00.975513 | orchestrator |
2026-01-30 04:38:00.975522 | orchestrator |
2026-01-30 04:38:00.975530 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:38:00.975539 | orchestrator | Friday 30 January 2026 04:38:00 +0000 (0:00:10.087) 0:01:43.581 ********
2026-01-30 04:38:00.975556 | orchestrator | ===============================================================================
2026-01-30 04:38:00.975565 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.62s
2026-01-30 04:38:00.975573 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 10.26s
2026-01-30 04:38:00.975598 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.09s
2026-01-30 04:38:00.975608 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.05s
2026-01-30 04:38:00.975616 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.67s
2026-01-30 04:38:00.975625 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.79s
2026-01-30 04:38:00.975634 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 5.40s
2026-01-30 04:38:00.975642 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.31s
2026-01-30 04:38:00.975651 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.08s
2026-01-30 04:38:00.975660 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.02s
2026-01-30 04:38:00.975668 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.83s
2026-01-30 04:38:00.975677 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.50s
2026-01-30 04:38:00.975685 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.43s
2026-01-30 04:38:00.975694 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.32s
2026-01-30 04:38:00.975702 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.30s
2026-01-30 04:38:00.975711 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.38s
2026-01-30 04:38:00.975720 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.26s
2026-01-30 04:38:00.975752 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.02s
2026-01-30 04:38:00.975761 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.79s
2026-01-30 04:38:00.975770 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 0.94s
2026-01-30 04:38:03.195841 | orchestrator | 2026-01-30 04:38:03 | INFO  | Task 73377220-809d-47ba-aeb4-cabe88fdafe4 (kolla-ceph-rgw) was prepared for execution.
2026-01-30 04:38:03.195950 | orchestrator | 2026-01-30 04:38:03 | INFO  | It takes a moment until task 73377220-809d-47ba-aeb4-cabe88fdafe4 (kolla-ceph-rgw) has been started and output is visible here.
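The "TASKS RECAP" above is the timing summary emitted by Ansible's profile_tasks callback: one row per task, padded with dashes, with the duration in seconds at the end. A minimal sketch (a hypothetical helper, not part of this job) for pulling the per-task durations out of such rows:

```python
import re

# Matches a profile_tasks recap row, e.g.
#   "aodh : Restart aodh-evaluator container ------------------ 10.26s"
RECAP_RE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap_line(line: str):
    """Return (task_name, seconds) for a timing row, or None for any
    other console line (headers, separators, PLAY RECAP, ...)."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    return m.group("task"), float(m.group("secs"))

if __name__ == "__main__":
    row = "aodh : Running aodh bootstrap container -------------------------------- 13.62s"
    print(parse_recap_line(row))  # → ('aodh : Running aodh bootstrap container', 13.62)
```

Sorting the parsed tuples by the second element reproduces the slowest-first ordering shown in the recap.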
2026-01-30 04:38:36.494435 | orchestrator |
2026-01-30 04:38:36.494557 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:38:36.494580 | orchestrator |
2026-01-30 04:38:36.494604 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:38:36.494631 | orchestrator | Friday 30 January 2026 04:38:07 +0000 (0:00:00.269) 0:00:00.269 ********
2026-01-30 04:38:36.494672 | orchestrator | ok: [testbed-manager]
2026-01-30 04:38:36.494692 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:38:36.494710 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:38:36.494793 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:38:36.494814 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:38:36.494834 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:38:36.494853 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:38:36.494872 | orchestrator |
2026-01-30 04:38:36.494884 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:38:36.494896 | orchestrator | Friday 30 January 2026 04:38:08 +0000 (0:00:00.817) 0:00:01.086 ********
2026-01-30 04:38:36.494907 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-01-30 04:38:36.494920 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-01-30 04:38:36.494939 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-01-30 04:38:36.494960 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-01-30 04:38:36.495010 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-01-30 04:38:36.495030 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-01-30 04:38:36.495049 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-01-30 04:38:36.495069 | orchestrator |
2026-01-30 04:38:36.495083 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-01-30 04:38:36.495095 | orchestrator |
2026-01-30 04:38:36.495108 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-01-30 04:38:36.495120 | orchestrator | Friday 30 January 2026 04:38:08 +0000 (0:00:00.729) 0:00:01.815 ********
2026-01-30 04:38:36.495133 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:38:36.495147 | orchestrator |
2026-01-30 04:38:36.495161 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-01-30 04:38:36.495174 | orchestrator | Friday 30 January 2026 04:38:10 +0000 (0:00:01.422) 0:00:03.238 ********
2026-01-30 04:38:36.495186 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-01-30 04:38:36.495199 | orchestrator |
2026-01-30 04:38:36.495211 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-01-30 04:38:36.495223 | orchestrator | Friday 30 January 2026 04:38:13 +0000 (0:00:03.370) 0:00:06.609 ********
2026-01-30 04:38:36.495236 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-01-30 04:38:36.495252 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-01-30 04:38:36.495264 | orchestrator |
2026-01-30 04:38:36.495277 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-01-30 04:38:36.495289 | orchestrator | Friday 30 January 2026 04:38:19 +0000 (0:00:05.856) 0:00:12.465 ********
2026-01-30 04:38:36.495303 | orchestrator | ok: [testbed-manager] => (item=service)
2026-01-30 04:38:36.495315 | orchestrator |
2026-01-30 04:38:36.495327 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-30 04:38:36.495338 | orchestrator | Friday 30 January 2026 04:38:22 +0000 (0:00:02.942) 0:00:15.407 ********
2026-01-30 04:38:36.495349 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-30 04:38:36.495360 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-30 04:38:36.495371 | orchestrator |
2026-01-30 04:38:36.495382 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-30 04:38:36.495393 | orchestrator | Friday 30 January 2026 04:38:25 +0000 (0:00:03.582) 0:00:18.990 ********
2026-01-30 04:38:36.495404 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-30 04:38:36.495415 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-30 04:38:36.495426 | orchestrator |
2026-01-30 04:38:36.495436 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-30 04:38:36.495447 | orchestrator | Friday 30 January 2026 04:38:31 +0000 (0:00:05.715) 0:00:24.705 ********
2026-01-30 04:38:36.495458 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-30 04:38:36.495469 | orchestrator |
2026-01-30 04:38:36.495515 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:38:36.495527 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:36.495539 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:36.495550 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:36.495561 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:36.495582 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:36.495616 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:36.495628 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:36.495639 | orchestrator |
2026-01-30 04:38:36.495650 | orchestrator |
2026-01-30 04:38:36.495661 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:38:36.495680 | orchestrator | Friday 30 January 2026 04:38:36 +0000 (0:00:04.443) 0:00:29.149 ********
2026-01-30 04:38:36.495691 | orchestrator | ===============================================================================
2026-01-30 04:38:36.495702 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.86s
2026-01-30 04:38:36.495713 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.72s
2026-01-30 04:38:36.495724 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.44s
2026-01-30 04:38:36.495759 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.58s
2026-01-30 04:38:36.495770 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.37s
2026-01-30 04:38:36.495781 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.94s
2026-01-30 04:38:36.495792 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.42s
2026-01-30 04:38:36.495802 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s
2026-01-30 04:38:36.495813 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2026-01-30 04:38:38.728415 | orchestrator | 2026-01-30 04:38:38 | INFO  | Task 027087e6-ffe9-4c5f-8833-c7ff58409206 (gnocchi) was prepared for execution.
2026-01-30 04:38:38.728536 | orchestrator | 2026-01-30 04:38:38 | INFO  | It takes a moment until task 027087e6-ffe9-4c5f-8833-c7ff58409206 (gnocchi) has been started and output is visible here.
2026-01-30 04:38:43.393276 | orchestrator |
2026-01-30 04:38:43.393397 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:38:43.393417 | orchestrator |
2026-01-30 04:38:43.393433 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:38:43.393448 | orchestrator | Friday 30 January 2026 04:38:42 +0000 (0:00:00.250) 0:00:00.250 ********
2026-01-30 04:38:43.393463 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:38:43.393479 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:38:43.393494 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:38:43.393509 | orchestrator |
2026-01-30 04:38:43.393524 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:38:43.393539 | orchestrator | Friday 30 January 2026 04:38:42 +0000 (0:00:00.292) 0:00:00.542 ********
2026-01-30 04:38:43.393554 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-01-30 04:38:43.393569 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-01-30 04:38:43.393584 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-01-30 04:38:43.393599 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-01-30 04:38:43.393614 | orchestrator |
2026-01-30 04:38:43.393629 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-01-30 04:38:43.393644 | orchestrator | skipping: no hosts matched
2026-01-30 04:38:43.393660 | orchestrator |
2026-01-30 04:38:43.393675 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:38:43.393691 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:43.393807 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:43.393825 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:38:43.393838 | orchestrator |
2026-01-30 04:38:43.393852 | orchestrator |
2026-01-30 04:38:43.393865 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:38:43.393878 | orchestrator | Friday 30 January 2026 04:38:43 +0000 (0:00:00.244) 0:00:00.787 ********
2026-01-30 04:38:43.393892 | orchestrator | ===============================================================================
2026-01-30 04:38:43.393905 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2026-01-30 04:38:43.393918 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.24s
2026-01-30 04:38:45.287896 | orchestrator | 2026-01-30 04:38:45 | INFO  | Task 385101b3-c8da-456b-9fa8-cdfdb4a71198 (manila) was prepared for execution.
2026-01-30 04:38:45.287982 | orchestrator | 2026-01-30 04:38:45 | INFO  | It takes a moment until task 385101b3-c8da-456b-9fa8-cdfdb4a71198 (manila) has been started and output is visible here.
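The gnocchi play above is skipped because the "Group hosts based on enabled services" task placed every host into the group `enable_gnocchi_False`, so the play targeting the `enable_gnocchi_True` pattern matched no hosts; the "Could not match supplied host pattern" warning is the harmless side effect of that empty pattern. A rough Python simulation of this group_by-style mechanism (host facts below are made up for illustration):

```python
from collections import defaultdict

def group_hosts(hosts: dict, flag: str) -> dict:
    """Mimic Ansible's group_by: place each host into a group named
    '<flag>_<True|False>' from its boolean fact. Sketch only, not
    kolla-ansible's actual implementation."""
    groups = defaultdict(list)
    for name, facts in hosts.items():
        groups[f"{flag}_{facts[flag]}"].append(name)
    return dict(groups)

if __name__ == "__main__":
    # All three nodes have the service disabled, as in the log above.
    hosts = {f"testbed-node-{i}": {"enable_gnocchi": False} for i in range(3)}
    groups = group_hosts(hosts, "enable_gnocchi")
    # The play targets enable_gnocchi_True, which is empty here:
    print(groups.get("enable_gnocchi_True", []))  # → []
```

With `enable_ceph_rgw` or `enable_manila` set to true, the same grouping instead yields a populated `..._True` group, which is why those plays run.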
2026-01-30 04:39:27.161468 | orchestrator | 2026-01-30 04:39:27.161565 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:39:27.161577 | orchestrator | 2026-01-30 04:39:27.161586 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:39:27.161594 | orchestrator | Friday 30 January 2026 04:38:48 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-01-30 04:39:27.161602 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:39:27.161610 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:39:27.161618 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:39:27.161626 | orchestrator | 2026-01-30 04:39:27.161633 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:39:27.161641 | orchestrator | Friday 30 January 2026 04:38:49 +0000 (0:00:00.221) 0:00:00.409 ******** 2026-01-30 04:39:27.161648 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-01-30 04:39:27.161656 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-01-30 04:39:27.161663 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-01-30 04:39:27.161671 | orchestrator | 2026-01-30 04:39:27.161678 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-01-30 04:39:27.161686 | orchestrator | 2026-01-30 04:39:27.161707 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-01-30 04:39:27.161714 | orchestrator | Friday 30 January 2026 04:38:49 +0000 (0:00:00.292) 0:00:00.702 ******** 2026-01-30 04:39:27.161722 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:39:27.161730 | orchestrator | 2026-01-30 04:39:27.161805 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-01-30 
04:39:27.161813 | orchestrator | Friday 30 January 2026 04:38:49 +0000 (0:00:00.460) 0:00:01.162 ******** 2026-01-30 04:39:27.161820 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:39:27.161829 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:39:27.161837 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:39:27.161844 | orchestrator | 2026-01-30 04:39:27.161852 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-01-30 04:39:27.161859 | orchestrator | Friday 30 January 2026 04:38:50 +0000 (0:00:00.353) 0:00:01.516 ******** 2026-01-30 04:39:27.161866 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-01-30 04:39:27.161874 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-01-30 04:39:27.161881 | orchestrator | 2026-01-30 04:39:27.161888 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-01-30 04:39:27.161896 | orchestrator | Friday 30 January 2026 04:38:56 +0000 (0:00:06.867) 0:00:08.383 ******** 2026-01-30 04:39:27.161922 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-01-30 04:39:27.161931 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-01-30 04:39:27.161938 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-01-30 04:39:27.161946 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-01-30 04:39:27.161953 | orchestrator | 2026-01-30 04:39:27.161960 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-01-30 04:39:27.161968 | orchestrator | Friday 30 January 2026 04:39:10 +0000 (0:00:13.193) 0:00:21.577 ******** 2026-01-30 04:39:27.161975 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-30 04:39:27.161982 | orchestrator | 2026-01-30 04:39:27.161990 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-01-30 04:39:27.161997 | orchestrator | Friday 30 January 2026 04:39:13 +0000 (0:00:03.282) 0:00:24.859 ******** 2026-01-30 04:39:27.162004 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-30 04:39:27.162012 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-01-30 04:39:27.162084 | orchestrator | 2026-01-30 04:39:27.162098 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-01-30 04:39:27.162111 | orchestrator | Friday 30 January 2026 04:39:17 +0000 (0:00:04.113) 0:00:28.973 ******** 2026-01-30 04:39:27.162124 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-30 04:39:27.162136 | orchestrator | 2026-01-30 04:39:27.162145 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-01-30 04:39:27.162153 | orchestrator | Friday 30 January 2026 04:39:20 +0000 (0:00:03.394) 0:00:32.367 ******** 2026-01-30 04:39:27.162161 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-01-30 04:39:27.162169 | orchestrator | 2026-01-30 04:39:27.162178 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-01-30 04:39:27.162186 | orchestrator | Friday 30 January 2026 04:39:24 +0000 (0:00:04.034) 0:00:36.402 ******** 2026-01-30 04:39:27.162214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:39:27.162231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:39:27.162241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:39:27.162257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:27.162267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:27.162276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:27.162291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:38.062872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:38.063014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:38.063032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:38.063044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:38.063056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:38.063068 | orchestrator | 2026-01-30 04:39:38.063082 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-01-30 04:39:38.063095 | orchestrator | Friday 30 January 2026 04:39:27 +0000 (0:00:02.244) 0:00:38.647 ******** 2026-01-30 04:39:38.063107 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:39:38.063118 | orchestrator | 2026-01-30 04:39:38.063130 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-01-30 04:39:38.063141 | orchestrator | Friday 30 January 2026 04:39:27 +0000 (0:00:00.562) 0:00:39.210 ******** 2026-01-30 04:39:38.063152 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:39:38.063164 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:39:38.063175 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:39:38.063186 | orchestrator | 2026-01-30 04:39:38.063197 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-01-30 04:39:38.063208 | orchestrator | Friday 30 January 2026 04:39:28 +0000 (0:00:01.008) 0:00:40.218 ******** 2026-01-30 04:39:38.063220 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-01-30 04:39:38.063251 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-01-30 04:39:38.063273 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-30 04:39:38.063288 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-30 04:39:38.063307 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-30 04:39:38.063321 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-30 04:39:38.063333 | orchestrator |
2026-01-30 04:39:38.063346 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-01-30 04:39:38.063359 | orchestrator | Friday 30 January 2026 04:39:30 +0000 (0:00:01.784) 0:00:42.003 ********
2026-01-30 04:39:38.063372 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-30 04:39:38.063385 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-30 04:39:38.063397 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-30 04:39:38.063410 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-30 04:39:38.063423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-01-30 04:39:38.063435 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-01-30 04:39:38.063448 | orchestrator |
2026-01-30 04:39:38.063461 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-01-30 04:39:38.063474 | orchestrator | Friday 30 January 2026 04:39:31 +0000 (0:00:01.302) 0:00:43.306 ********
2026-01-30 04:39:38.063488 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-01-30 04:39:38.063499 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-01-30 04:39:38.063510 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-01-30 04:39:38.063521 | orchestrator |
2026-01-30 04:39:38.063658 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-01-30 04:39:38.063670 | orchestrator | Friday 30 January 2026 04:39:32 +0000 (0:00:00.765) 0:00:44.071 ********
2026-01-30 04:39:38.063682 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:39:38.063693 | orchestrator |
2026-01-30 04:39:38.063704 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-01-30 04:39:38.063715 | orchestrator | Friday 30 January 2026 04:39:32 +0000 (0:00:00.134) 0:00:44.206 ********
2026-01-30 04:39:38.063726 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:39:38.063755 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:39:38.063766 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:39:38.063777 | orchestrator |
2026-01-30 04:39:38.063788 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-01-30 04:39:38.063799 | orchestrator | Friday 30 January 2026 04:39:33 +0000 (0:00:00.580) 0:00:44.786 ********
2026-01-30 04:39:38.063810 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:39:38.063829 | orchestrator |
2026-01-30 04:39:38.063840 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-01-30 04:39:38.063851 | orchestrator | Friday 30 January 2026 04:39:33 +0000 (0:00:00.596) 0:00:45.382 ********
2026-01-30 04:39:38.063874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:38.890415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:38.890486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:38.890493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:38.890568 | orchestrator |
2026-01-30 04:39:38.890573 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] ***
2026-01-30 04:39:38.890579 | orchestrator | Friday 30 January 2026 04:39:38 +0000 (0:00:04.154) 0:00:49.537 ********
2026-01-30 04:39:38.890587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:39.489991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490147 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:39:39.490158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:39.490191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490235 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:39:39.490243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:39.490251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:39.490291 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:39:39.490299 | orchestrator |
2026-01-30 04:39:39.490307 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ******
2026-01-30 04:39:39.490316 | orchestrator | Friday 30 January 2026 04:39:38 +0000 (0:00:00.838) 0:00:50.375 ********
2026-01-30 04:39:39.490334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:44.049939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050155 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:39:44.050166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:44.050174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050233 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:39:44.050245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:44.050268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:44.050292 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:39:44.050300 | orchestrator |
2026-01-30 04:39:44.050309 | orchestrator | TASK [manila : Copying over config.json files for services] ********************
2026-01-30 04:39:44.050319 | orchestrator | Friday 30 January 2026 04:39:39 +0000 (0:00:00.819) 0:00:51.195 ********
2026-01-30 04:39:44.050344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:50.714395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:50.714533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:50.714552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:39:50.714703 | orchestrator |
2026-01-30 04:39:50.714716 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-01-30 04:39:50.714729 | orchestrator | Friday 30 January 2026 04:39:44 +0000 (0:00:04.572) 0:00:55.768 ********
2026-01-30 04:39:50.714821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-01-30 04:39:54.695932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api',
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:39:54.696005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:39:54.696012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:54.696019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 04:39:54.696035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:54.696050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 04:39:54.696069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:54.696073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 04:39:54.696078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:54.696082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:54.696086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:39:54.696090 | orchestrator | 2026-01-30 04:39:54.696098 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-01-30 04:39:54.696103 | orchestrator | Friday 30 January 2026 04:39:50 +0000 (0:00:06.429) 0:01:02.198 ******** 
2026-01-30 04:39:54.696108 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-01-30 04:39:54.696116 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-01-30 04:39:54.696120 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-01-30 04:39:54.696123 | orchestrator | 2026-01-30 04:39:54.696127 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-01-30 04:39:54.696131 | orchestrator | Friday 30 January 2026 04:39:54 +0000 (0:00:03.349) 0:01:05.547 ******** 2026-01-30 04:39:54.696140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-30 04:39:58.330977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331114 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:39:58.331144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-30 04:39:58.331178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331233 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:39:58.331244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-30 04:39:58.331254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 04:39:58.331302 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:39:58.331313 | orchestrator | 2026-01-30 04:39:58.331324 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-01-30 04:39:58.331335 | orchestrator | Friday 30 January 2026 04:39:54 +0000 (0:00:00.634) 0:01:06.182 ******** 2026-01-30 04:39:58.331354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:40:41.612310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:40:41.612450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-30 04:40:41.612470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-01-30 04:40:41.612685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-01-30 04:40:41.612697 | orchestrator |
2026-01-30 04:40:41.612710 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-01-30 04:40:41.612723 | orchestrator | Friday 30 January 2026 04:39:58 +0000 (0:00:03.637) 0:01:09.820 ********
2026-01-30 04:40:41.612735 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:40:41.612787 | orchestrator |
2026-01-30 04:40:41.612807 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-01-30 04:40:41.612827 | orchestrator | Friday 30 January 2026 04:40:00 +0000 (0:00:02.289) 0:01:12.110 ********
2026-01-30 04:40:41.612845 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:40:41.612864 | orchestrator |
2026-01-30 04:40:41.612878 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-01-30 04:40:41.612889 | orchestrator | Friday 30 January 2026 04:40:03 +0000 (0:00:02.480) 0:01:14.590 ********
2026-01-30 04:40:41.612900 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:40:41.612911 | orchestrator |
2026-01-30 04:40:41.612922 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-01-30 04:40:41.612934 | orchestrator | Friday 30 January 2026 04:40:41 +0000 (0:00:38.157) 0:01:52.748 ********
2026-01-30 04:40:41.612945 | orchestrator |
2026-01-30 04:40:41.612965 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-01-30 04:41:30.563468 | orchestrator | Friday 30 January 2026 04:40:41 +0000 (0:00:00.070) 0:01:52.818 ********
2026-01-30 04:41:30.563593 | orchestrator |
2026-01-30 04:41:30.563620 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-01-30 04:41:30.563641 | orchestrator | Friday 30 January 2026 04:40:41 +0000 (0:00:00.093) 0:01:52.912 ********
2026-01-30 04:41:30.563659 | orchestrator |
2026-01-30 04:41:30.563678 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-01-30 04:41:30.563697 | orchestrator | Friday 30 January 2026 04:40:41 +0000 (0:00:00.081) 0:01:52.993 ********
2026-01-30 04:41:30.563716 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:41:30.563805 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:41:30.563828 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:41:30.563840 | orchestrator |
2026-01-30 04:41:30.563851 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-01-30 04:41:30.563863 | orchestrator | Friday 30 January 2026 04:40:56 +0000 (0:00:14.539) 0:02:07.532 ********
2026-01-30 04:41:30.563904 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:41:30.563916 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:41:30.563927 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:41:30.563938 | orchestrator |
2026-01-30 04:41:30.563949 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-01-30 04:41:30.563960 | orchestrator | Friday 30 January 2026 04:41:06 +0000 (0:00:10.655) 0:02:18.188 ********
2026-01-30 04:41:30.563971 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:41:30.563982 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:41:30.563993 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:41:30.564004 | orchestrator |
2026-01-30 04:41:30.564015 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-01-30 04:41:30.564026 | orchestrator | Friday 30 January 2026 04:41:17 +0000 (0:00:10.457) 0:02:28.645 ********
2026-01-30 04:41:30.564036 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:41:30.564047 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:41:30.564058 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:41:30.564069 | orchestrator |
2026-01-30 04:41:30.564079 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:41:30.564092 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-30 04:41:30.564105 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:41:30.564115 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-30 04:41:30.564126 | orchestrator |
2026-01-30 04:41:30.564137 | orchestrator |
2026-01-30 04:41:30.564148 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:41:30.564159 | orchestrator | Friday 30 January 2026 04:41:30 +0000 (0:00:12.840) 0:02:41.486 ********
2026-01-30 04:41:30.564170 | orchestrator | ===============================================================================
2026-01-30 04:41:30.564180 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 38.16s
2026-01-30 04:41:30.564191 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.54s
2026-01-30 04:41:30.564217 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.19s
2026-01-30 04:41:30.564228 | orchestrator | manila : Restart manila-share container -------------------------------- 12.84s
2026-01-30 04:41:30.564239 | orchestrator | manila : Restart manila-data container --------------------------------- 10.66s
2026-01-30 04:41:30.564249 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.46s
2026-01-30 04:41:30.564260 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.87s
2026-01-30 04:41:30.564271 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.43s
2026-01-30 04:41:30.564281 | orchestrator | manila : Copying over config.json files for services -------------------- 4.57s
2026-01-30 04:41:30.564292 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.15s
2026-01-30 04:41:30.564303 | orchestrator | service-ks-register : manila | Creating users --------------------------- 4.11s
2026-01-30 04:41:30.564314 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 4.03s
2026-01-30 04:41:30.564324 | orchestrator | manila : Check manila containers ---------------------------------------- 3.64s
2026-01-30 04:41:30.564335 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.39s
2026-01-30 04:41:30.564346 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.35s
2026-01-30 04:41:30.564357 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.28s
2026-01-30 04:41:30.564368 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.48s
2026-01-30 04:41:30.564378 | orchestrator | manila : Creating Manila database --------------------------------------- 2.29s
2026-01-30 04:41:30.564398 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.24s
2026-01-30 04:41:30.564409 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.79s
2026-01-30 04:41:30.839858 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-01-30 04:41:42.979624 | orchestrator | 2026-01-30 04:41:42
| INFO  | Task 5ddfb6b0-8958-4192-be93-52a4b7dc97ee (netdata) was prepared for execution.
2026-01-30 04:41:42.979855 | orchestrator | 2026-01-30 04:41:42 | INFO  | It takes a moment until task 5ddfb6b0-8958-4192-be93-52a4b7dc97ee (netdata) has been started and output is visible here.
2026-01-30 04:43:15.803256 | orchestrator |
2026-01-30 04:43:15.803349 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:43:15.803361 | orchestrator |
2026-01-30 04:43:15.803369 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:43:15.803378 | orchestrator | Friday 30 January 2026 04:41:47 +0000 (0:00:00.225) 0:00:00.225 ********
2026-01-30 04:43:15.803385 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-30 04:43:15.803393 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-30 04:43:15.803400 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-30 04:43:15.803407 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-30 04:43:15.803414 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-30 04:43:15.803421 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-30 04:43:15.803428 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-30 04:43:15.803435 | orchestrator |
2026-01-30 04:43:15.803442 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-30 04:43:15.803449 | orchestrator |
2026-01-30 04:43:15.803456 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-30 04:43:15.803462 | orchestrator | Friday 30 January 2026 04:41:48 +0000 (0:00:00.845) 0:00:01.071 ********
2026-01-30 04:43:15.803471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:43:15.803481 | orchestrator |
2026-01-30 04:43:15.803488 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-30 04:43:15.803495 | orchestrator | Friday 30 January 2026 04:41:49 +0000 (0:00:01.263) 0:00:02.335 ********
2026-01-30 04:43:15.803502 | orchestrator | ok: [testbed-manager]
2026-01-30 04:43:15.803511 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:43:15.803519 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:43:15.803532 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:43:15.803544 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:43:15.803555 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:43:15.803567 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:43:15.803579 | orchestrator |
2026-01-30 04:43:15.803592 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-30 04:43:15.803603 | orchestrator | Friday 30 January 2026 04:41:51 +0000 (0:00:01.814) 0:00:04.149 ********
2026-01-30 04:43:15.803613 | orchestrator | ok: [testbed-manager]
2026-01-30 04:43:15.803624 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:43:15.803635 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:43:15.803647 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:43:15.803660 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:43:15.803673 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:43:15.803686 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:43:15.803699 | orchestrator |
2026-01-30 04:43:15.803711 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-30 04:43:15.803723 | orchestrator | Friday 30 January 2026 04:41:53 +0000 (0:00:02.193) 0:00:06.343 ********
2026-01-30 04:43:15.803780 | orchestrator | changed: [testbed-manager]
2026-01-30 04:43:15.803815 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:43:15.803827 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:43:15.803839 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:43:15.803850 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:43:15.803876 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:43:15.803887 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:43:15.803898 | orchestrator |
2026-01-30 04:43:15.803909 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-30 04:43:15.803921 | orchestrator | Friday 30 January 2026 04:41:54 +0000 (0:00:01.467) 0:00:07.810 ********
2026-01-30 04:43:15.803932 | orchestrator | changed: [testbed-manager]
2026-01-30 04:43:15.803942 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:43:15.803953 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:43:15.803964 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:43:15.803975 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:43:15.803987 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:43:15.803998 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:43:15.804010 | orchestrator |
2026-01-30 04:43:15.804022 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-30 04:43:15.804034 | orchestrator | Friday 30 January 2026 04:42:11 +0000 (0:00:16.612) 0:00:24.423 ********
2026-01-30 04:43:15.804046 | orchestrator | changed: [testbed-manager]
2026-01-30 04:43:15.804058 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:43:15.804070 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:43:15.804083 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:43:15.804095 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:43:15.804106 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:43:15.804118 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:43:15.804130 | orchestrator |
2026-01-30 04:43:15.804143 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-30 04:43:15.804155 | orchestrator | Friday 30 January 2026 04:42:50 +0000 (0:00:39.449) 0:01:03.872 ********
2026-01-30 04:43:15.804168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:43:15.804182 | orchestrator |
2026-01-30 04:43:15.804193 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-30 04:43:15.804204 | orchestrator | Friday 30 January 2026 04:42:52 +0000 (0:00:01.450) 0:01:05.323 ********
2026-01-30 04:43:15.804214 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-30 04:43:15.804225 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-30 04:43:15.804236 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-30 04:43:15.804247 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-30 04:43:15.804279 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-30 04:43:15.804291 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-30 04:43:15.804302 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-30 04:43:15.804312 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-30 04:43:15.804323 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-30 04:43:15.804334 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-30 04:43:15.804369 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-30 04:43:15.804379 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-30 04:43:15.804390 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-30 04:43:15.804400 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-30 04:43:15.804412 | orchestrator |
2026-01-30 04:43:15.804422 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-30 04:43:15.804434 | orchestrator | Friday 30 January 2026 04:42:55 +0000 (0:00:03.117) 0:01:08.440 ********
2026-01-30 04:43:15.804461 | orchestrator | ok: [testbed-manager]
2026-01-30 04:43:15.804472 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:43:15.804483 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:43:15.804493 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:43:15.804504 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:43:15.804514 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:43:15.804525 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:43:15.804535 | orchestrator |
2026-01-30 04:43:15.804545 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-30 04:43:15.804557 | orchestrator | Friday 30 January 2026 04:42:56 +0000 (0:00:01.212) 0:01:09.652 ********
2026-01-30 04:43:15.804567 | orchestrator | changed: [testbed-manager]
2026-01-30 04:43:15.804578 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:43:15.804590 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:43:15.804602 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:43:15.804613 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:43:15.804623 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:43:15.804634 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:43:15.804645 | orchestrator |
2026-01-30 04:43:15.804655 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-30 04:43:15.804667 | orchestrator | Friday 30 January 2026 04:42:57 +0000 (0:00:01.078) 0:01:10.731 ********
2026-01-30 04:43:15.804678 | orchestrator | ok: [testbed-manager]
2026-01-30 04:43:15.804690 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:43:15.804701 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:43:15.804712 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:43:15.804747 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:43:15.804760 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:43:15.804770 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:43:15.804781 | orchestrator |
2026-01-30 04:43:15.804792 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-30 04:43:15.804804 | orchestrator | Friday 30 January 2026 04:42:58 +0000 (0:00:01.119) 0:01:11.851 ********
2026-01-30 04:43:15.804815 | orchestrator | ok: [testbed-manager]
2026-01-30 04:43:15.804827 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:43:15.804839 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:43:15.804849 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:43:15.804861 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:43:15.804872 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:43:15.804882 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:43:15.804893 | orchestrator |
2026-01-30 04:43:15.804905 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-30 04:43:15.804917 | orchestrator | Friday 30 January 2026 04:43:00 +0000 (0:00:01.588) 0:01:13.439 ********
2026-01-30 04:43:15.804938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-30 04:43:15.804954 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:43:15.804966 | orchestrator |
2026-01-30 04:43:15.804977 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-30 04:43:15.804988 | orchestrator | Friday 30 January 2026 04:43:01 +0000 (0:00:01.354) 0:01:14.793 ********
2026-01-30 04:43:15.804999 | orchestrator | changed: [testbed-manager]
2026-01-30 04:43:15.805010 | orchestrator |
2026-01-30 04:43:15.805020 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-30 04:43:15.805031 | orchestrator | Friday 30 January 2026 04:43:03 +0000 (0:00:02.098) 0:01:16.891 ********
2026-01-30 04:43:15.805043 | orchestrator | changed: [testbed-manager]
2026-01-30 04:43:15.805053 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:43:15.805065 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:43:15.805075 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:43:15.805086 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:43:15.805110 | orchestrator | changed: [testbed-node-2]
2026-01-30 04:43:15.805122 | orchestrator | changed: [testbed-node-1]
2026-01-30 04:43:15.805132 | orchestrator |
2026-01-30 04:43:15.805144 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:43:15.805155 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:43:15.805167 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:43:15.805177 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:43:15.805190 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:43:15.805216 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:43:16.165915 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:43:16.166011 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-30 04:43:16.166065 | orchestrator |
2026-01-30 04:43:16.166074 | orchestrator |
2026-01-30 04:43:16.166081 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:43:16.166089 | orchestrator | Friday 30 January 2026 04:43:15 +0000 (0:00:11.807) 0:01:28.699 ********
2026-01-30 04:43:16.166096 | orchestrator | ===============================================================================
2026-01-30 04:43:16.166102 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.45s
2026-01-30 04:43:16.166109 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.61s
2026-01-30 04:43:16.166116 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.81s
2026-01-30 04:43:16.166128 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.12s
2026-01-30 04:43:16.166134 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.19s
2026-01-30 04:43:16.166141 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.10s
2026-01-30 04:43:16.166148 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.81s
2026-01-30 04:43:16.166154 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.59s
2026-01-30 04:43:16.166161 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.47s
2026-01-30 04:43:16.166168 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.45s
2026-01-30 04:43:16.166174 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.35s
2026-01-30 04:43:16.166181 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.26s
2026-01-30 04:43:16.166187 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s
2026-01-30 04:43:16.166195 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.12s
2026-01-30 04:43:16.166202 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.08s
2026-01-30 04:43:16.166209 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s
2026-01-30 04:43:18.551204 | orchestrator | 2026-01-30 04:43:18 | INFO  | Task 4d1be18f-2bb6-4dfa-855a-ce20d304e633 (prometheus) was prepared for execution.
2026-01-30 04:43:18.551326 | orchestrator | 2026-01-30 04:43:18 | INFO  | It takes a moment until task 4d1be18f-2bb6-4dfa-855a-ce20d304e633 (prometheus) has been started and output is visible here.
2026-01-30 04:43:27.590544 | orchestrator |
2026-01-30 04:43:27.590702 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-30 04:43:27.590814 | orchestrator |
2026-01-30 04:43:27.590851 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-30 04:43:27.590868 | orchestrator | Friday 30 January 2026 04:43:22 +0000 (0:00:00.262) 0:00:00.262 ********
2026-01-30 04:43:27.590882 | orchestrator | ok: [testbed-manager]
2026-01-30 04:43:27.590898 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:43:27.590912 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:43:27.590921 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:43:27.590931 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:43:27.590939 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:43:27.590948 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:43:27.590956 | orchestrator |
2026-01-30 04:43:27.590965 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-30 04:43:27.590974 | orchestrator | Friday 30 January 2026 04:43:23 +0000 (0:00:00.824) 0:00:01.087 ********
2026-01-30 04:43:27.590983 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-01-30 04:43:27.590993 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-01-30 04:43:27.591001 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-01-30 04:43:27.591010 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-01-30 04:43:27.591018 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-01-30 04:43:27.591027 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-01-30 04:43:27.591037 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-01-30 04:43:27.591052 | orchestrator |
2026-01-30 04:43:27.591066 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-30 04:43:27.591080 | orchestrator |
2026-01-30 04:43:27.591095 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-30 04:43:27.591110 | orchestrator | Friday 30 January 2026 04:43:24 +0000 (0:00:00.930) 0:00:02.018 ********
2026-01-30 04:43:27.591125 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 04:43:27.591141 | orchestrator |
2026-01-30 04:43:27.591157 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-01-30 04:43:27.591171 | orchestrator | Friday 30 January 2026 04:43:25 +0000 (0:00:01.364) 0:00:03.382 ********
2026-01-30 04:43:27.591190 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server',
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-30 04:43:27.591210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:27.591227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:27.591258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:27.591302 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:27.591313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:27.591322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:27.591333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:27.591342 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:27.591352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:27.591369 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:27.591384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:28.621998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:28.622192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:28.622212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:28.622225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:28.622236 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-30 04:43:28.622251 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-30 04:43:28.622310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:28.622332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:28.622345 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-30 04:43:28.622356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:28.622369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:28.622380 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes':
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:28.622401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:28.622412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-30 04:43:28.622437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:33.491090 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:33.491214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:33.491235 | orchestrator | 2026-01-30 04:43:33.491252 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-01-30 04:43:33.491265 | orchestrator | Friday 30 January 2026 04:43:28 +0000 (0:00:02.804) 0:00:06.187 ******** 2026-01-30 04:43:33.491279 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 04:43:33.491292 | orchestrator | 2026-01-30 04:43:33.491304 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-01-30 04:43:33.491315 | orchestrator | Friday 30 January 2026 04:43:30 +0000 (0:00:01.622) 0:00:07.809 ******** 2026-01-30 04:43:33.491328 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-30 04:43:33.491368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:33.491380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:33.491407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:33.491442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:33.491454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:33.491465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-01-30 04:43:33.491477 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:33.491498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:33.491509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:33.491521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:33.491538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:33.491559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623413 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:35.623461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:35.623468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:35.623477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623527 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623536 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-30 04:43:35.623548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:43:35.623566 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:35.623576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:35.623588 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:36.590232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:43:36.590366 | orchestrator | 2026-01-30 04:43:36.590383 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-01-30 04:43:36.590397 | orchestrator | Friday 30 January 2026 04:43:35 +0000 (0:00:05.374) 0:00:13.184 ******** 2026-01-30 04:43:36.590410 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-30 04:43:36.590423 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:36.590435 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:36.590497 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-30 04:43:36.590530 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:36.590553 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:43:36.590566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:36.590578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:36.590590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:36.590602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:36.590613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:36.590624 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:43:36.590641 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:36.590653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:36.590679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:37.145339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:37.145433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:37.145448 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:43:37.145461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:37.145473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:37.145483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:37.145510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:37.145521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:37.145553 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:43:37.145581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:37.145592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:37.145602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 04:43:37.145613 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:43:37.145623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:37.145633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:37.145649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 04:43:37.145660 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:43:37.145670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:37.145694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:38.080756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 04:43:38.080837 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:43:38.080845 | orchestrator | 2026-01-30 04:43:38.080852 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-30 04:43:38.080859 | orchestrator | Friday 30 January 2026 04:43:37 +0000 (0:00:01.521) 0:00:14.706 ******** 2026-01-30 04:43:38.080865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:38.080872 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:38.080878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:38.080885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:38.080918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:38.080936 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-30 04:43:38.080943 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:38.080950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:38.080957 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-30 04:43:38.080963 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:38.080976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:38.080981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:38.080992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:39.261358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:39.261458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:39.261475 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:43:39.261488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:39.261499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:39.261532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:39.261556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:39.261567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 04:43:39.261575 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:43:39.261584 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:43:39.261593 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:43:39.261648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:39.261660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:39.261669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 04:43:39.261678 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:43:39.261686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:39.261710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:39.261842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 04:43:39.261852 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:43:39.261861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 04:43:39.261880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 04:43:42.721372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 04:43:42.721488 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:43:42.721506 | orchestrator | 2026-01-30 04:43:42.721518 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-30 04:43:42.721531 | orchestrator | Friday 30 January 2026 04:43:39 +0000 (0:00:02.111) 0:00:16.817 ******** 2026-01-30 04:43:42.721545 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-30 04:43:42.721585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:42.721598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:42.721623 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:43:42.721635 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:42.721666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:42.721679 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:42.721690 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-30 04:43:42.721701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:42.721755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:42.721775 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:42.721788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:42.721800 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:42.721820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:45.490169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:45.490311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:45.490382 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-30 04:43:45.490423 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-30 04:43:45.490444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-30 04:43:45.490463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:45.490482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-30 04:43:45.490527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:45.490548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:45.490583 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:45.490603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:45.490633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-30 04:43:45.490653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:45.490674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:45.490700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 04:43:49.614644 | orchestrator |
2026-01-30 04:43:49.614800 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-01-30 04:43:49.614841 | orchestrator | Friday 30 January 2026 04:43:45 +0000 (0:00:06.233) 0:00:23.050 ********
2026-01-30 04:43:49.614852 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-30 04:43:49.614862 | orchestrator |
2026-01-30 04:43:49.614871 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-01-30 04:43:49.614880 | orchestrator | Friday 30 January 2026 04:43:46 +0000 (0:00:00.840) 0:00:23.891 ********
2026-01-30 04:43:49.614891 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103773, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.472727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.614904 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103773, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.472727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.614928 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103773, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.472727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.614939 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103802, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4758608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.614949 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103773, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.472727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.614958 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103802, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4758608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.614993 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103764, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4721768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.615003 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103773, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.472727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.615012 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103802, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4758608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.615021 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103773, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.472727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.615035 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103773, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.472727, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.615044 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103783, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4745593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.615053 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103764, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4721768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:49.615074 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103802, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4758608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.091816 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103802, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4758608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.091944 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103764, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4721768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.091963 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103764, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4721768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.091986 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103764, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4721768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.091997 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103802, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4758608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092006 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1103802, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4758608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092036 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103757, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4692655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092062 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103783, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4745593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092073 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103783, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4745593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092082 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103783, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4745593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092096 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103764, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4721768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092106 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103783, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4745593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092115 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103774, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4729235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092131 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103757, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4692655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:51.092149 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103757, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4692655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174259 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103783, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4745593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174367 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103757, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4692655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174399 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103757, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4692655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174408 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103781, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4740121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174437 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103774, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4729235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174446 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103774, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4729235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174454 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103757, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4692655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174478 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103781, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4740121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174486 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103774, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4729235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174498 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1103764, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4721768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174506 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103774, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4729235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:43:52.174520 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False,
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103777, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4731193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:52.174528 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103774, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4729235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:52.174535 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103781, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4740121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:52.174549 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103777, 'dev': 125, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4731193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.497921 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103771, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4722655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498118 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103781, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4740121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498152 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103781, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4740121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498234 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103781, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4740121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498262 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103777, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4731193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498281 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103783, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4745593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:43:53.498300 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103771, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4722655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498342 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103777, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4731193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498368 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103771, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4722655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498403 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103777, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4731193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498423 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103795, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4756596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498444 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103777, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4731193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498465 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103795, 'dev': 125, 'nlink': 
1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4756596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498485 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103771, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4722655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:53.498512 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103771, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4722655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523575 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103795, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4756596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523749 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103771, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4722655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523770 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103751, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4685075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523782 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103795, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4756596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 
04:43:54.523794 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103821, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4784672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523806 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103751, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4685075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523817 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103751, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4685075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523856 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103795, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4756596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523878 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103795, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4756596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523889 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103791, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.474853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523902 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 
1103757, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4692655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:43:54.523913 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103751, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4685075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523924 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103751, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4685075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523936 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103821, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4784672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:54.523960 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103821, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4784672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751421 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103751, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4685075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751527 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103821, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4784672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-01-30 04:43:55.751551 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103761, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4702654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751572 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103791, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.474853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751592 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103791, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.474853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751611 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103821, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4784672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751699 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103821, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4784672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751778 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103761, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4702654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751792 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 
1103791, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.474853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751803 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103755, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4686258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751815 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103761, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4702654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751826 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103774, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4729235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:43:55.751837 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103791, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.474853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751864 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103791, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.474853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:55.751884 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103780, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4738102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2026-01-30 04:43:56.690011 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103755, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4686258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690133 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103755, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4686258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690147 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103761, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4702654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690157 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103780, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4738102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690166 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103779, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4733307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690207 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103761, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4702654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690214 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103779, 
'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4733307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690233 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103761, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4702654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690239 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103780, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4738102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690244 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103755, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4686258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690249 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103814, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4783363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690268 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:43:56.690275 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103814, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4783363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690280 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:43:56.690289 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103755, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4686258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:43:56.690299 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103755, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4686258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054176 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103780, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4738102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054295 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103779, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4733307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054305 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103780, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4738102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054313 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103780, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4738102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054342 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1103781, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4740121, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:05.054361 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103814, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4783363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054369 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:05.054379 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103779, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4733307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054401 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103779, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4733307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054408 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 
'inode': 1103779, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4733307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054414 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103814, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4783363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054420 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:05.054432 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103814, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4783363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054439 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:05.054451 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103814, 
'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4783363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-30 04:44:05.054458 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:05.054468 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103777, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4731193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:05.054481 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103771, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4722655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209058 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103795, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1769741416.4756596, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209197 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103751, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4685075, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209226 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103821, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4784672, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209277 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103791, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.474853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209298 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103761, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4702654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209336 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1103755, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4686258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209357 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103780, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4738102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-30 04:44:11.209399 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103779, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4733307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:44:11.209421 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103814, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4783363, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-01-30 04:44:11.209443 | orchestrator |
2026-01-30 04:44:11.209465 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-01-30 04:44:11.209500 | orchestrator | Friday 30 January 2026 04:44:08 +0000 (0:00:22.282) 0:00:46.175 ********
2026-01-30 04:44:11.209518 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-30 04:44:11.209540 | orchestrator |
2026-01-30 04:44:11.209559 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-01-30 04:44:11.209578 | orchestrator | Friday 30 January 2026 04:44:09 +0000 (0:00:00.709) 0:00:46.884 ********
2026-01-30 04:44:11.209597 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-01-30 04:44:11.209790 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-01-30 04:44:11.209888 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-01-30 04:44:11.209943 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-01-30 04:44:11.209997 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-01-30 04:44:11.210153 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-01-30 04:44:11.210234 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-01-30 04:44:11.210288 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 04:44:11.210298 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-30 04:44:11.210309 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-30 04:44:11.210320 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-30 04:44:11.210331 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-30 04:44:11.210352 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-30 04:44:11.210364 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-30 04:44:11.210375 | orchestrator |
2026-01-30 04:44:11.210397 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-30 04:44:38.140516 | orchestrator | Friday 30 January 2026 04:44:11 +0000
(0:00:01.888) 0:00:48.772 ******** 2026-01-30 04:44:38.140662 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-30 04:44:38.140748 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.140771 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-30 04:44:38.140791 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.140811 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-30 04:44:38.140830 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:38.140849 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-30 04:44:38.140869 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:38.140888 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-30 04:44:38.140908 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.140929 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-01-30 04:44:38.140949 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.140971 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-01-30 04:44:38.140989 | orchestrator | 2026-01-30 04:44:38.141010 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-01-30 04:44:38.141029 | orchestrator | Friday 30 January 2026 04:44:25 +0000 (0:00:14.477) 0:01:03.249 ******** 2026-01-30 04:44:38.141048 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-30 04:44:38.141067 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.141086 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-30 04:44:38.141105 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.141121 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-30 04:44:38.141139 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:38.141151 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-30 04:44:38.141162 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.141174 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-30 04:44:38.141185 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.141198 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-01-30 04:44:38.141209 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:38.141220 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-01-30 04:44:38.141232 | orchestrator | 2026-01-30 04:44:38.141243 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-01-30 04:44:38.141254 | orchestrator | Friday 30 January 2026 04:44:28 +0000 (0:00:02.640) 0:01:05.890 ******** 2026-01-30 04:44:38.141265 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-30 04:44:38.141277 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.141288 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-30 04:44:38.141298 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.141308 | orchestrator | skipping: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-30 04:44:38.141344 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:38.141355 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-30 04:44:38.141364 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:38.141388 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-30 04:44:38.141399 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.141409 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-01-30 04:44:38.141421 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-01-30 04:44:38.141437 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.141454 | orchestrator | 2026-01-30 04:44:38.141470 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-01-30 04:44:38.141485 | orchestrator | Friday 30 January 2026 04:44:29 +0000 (0:00:01.408) 0:01:07.298 ******** 2026-01-30 04:44:38.141501 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 04:44:38.141517 | orchestrator | 2026-01-30 04:44:38.141534 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-01-30 04:44:38.141551 | orchestrator | Friday 30 January 2026 04:44:30 +0000 (0:00:00.628) 0:01:07.927 ******** 2026-01-30 04:44:38.141567 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:44:38.141584 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.141599 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.141616 | orchestrator | 
skipping: [testbed-node-2] 2026-01-30 04:44:38.141660 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:38.141678 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.141735 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.141752 | orchestrator | 2026-01-30 04:44:38.141767 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-01-30 04:44:38.141783 | orchestrator | Friday 30 January 2026 04:44:31 +0000 (0:00:00.676) 0:01:08.604 ******** 2026-01-30 04:44:38.141796 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:44:38.141806 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:38.141816 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.141825 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.141835 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:44:38.141845 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:44:38.141854 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:44:38.141864 | orchestrator | 2026-01-30 04:44:38.141873 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-01-30 04:44:38.141883 | orchestrator | Friday 30 January 2026 04:44:32 +0000 (0:00:01.910) 0:01:10.514 ******** 2026-01-30 04:44:38.141893 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-30 04:44:38.141903 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.141919 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-30 04:44:38.141936 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-30 04:44:38.141952 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-30 04:44:38.141968 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-30 04:44:38.141982 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.141998 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:44:38.142013 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:38.142092 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:38.142109 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-30 04:44:38.142144 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.142162 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-01-30 04:44:38.142179 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.142194 | orchestrator | 2026-01-30 04:44:38.142209 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-01-30 04:44:38.142225 | orchestrator | Friday 30 January 2026 04:44:34 +0000 (0:00:01.355) 0:01:11.870 ******** 2026-01-30 04:44:38.142241 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-30 04:44:38.142258 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-30 04:44:38.142275 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.142290 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.142306 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-30 04:44:38.142323 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:38.142340 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-30 04:44:38.142356 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:38.142373 | orchestrator | skipping: 
[testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-30 04:44:38.142389 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-01-30 04:44:38.142406 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.142421 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.142437 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-01-30 04:44:38.142452 | orchestrator | 2026-01-30 04:44:38.142468 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-01-30 04:44:38.142483 | orchestrator | Friday 30 January 2026 04:44:35 +0000 (0:00:01.382) 0:01:13.252 ******** 2026-01-30 04:44:38.142499 | orchestrator | [WARNING]: Skipped 2026-01-30 04:44:38.142529 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-01-30 04:44:38.142547 | orchestrator | due to this access issue: 2026-01-30 04:44:38.142563 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-01-30 04:44:38.142581 | orchestrator | not a directory 2026-01-30 04:44:38.142592 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 04:44:38.142602 | orchestrator | 2026-01-30 04:44:38.142612 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-01-30 04:44:38.142622 | orchestrator | Friday 30 January 2026 04:44:36 +0000 (0:00:01.053) 0:01:14.305 ******** 2026-01-30 04:44:38.142631 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:44:38.142641 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.142650 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.142660 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:38.142670 | orchestrator | skipping: [testbed-node-3] 
2026-01-30 04:44:38.142740 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:38.142754 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:38.142764 | orchestrator | 2026-01-30 04:44:38.142774 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-01-30 04:44:38.142787 | orchestrator | Friday 30 January 2026 04:44:37 +0000 (0:00:00.906) 0:01:15.212 ******** 2026-01-30 04:44:38.142804 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:44:38.142821 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:44:38.142838 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:44:38.142872 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:44:41.046622 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:44:41.046801 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:44:41.046825 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:44:41.046841 | orchestrator | 2026-01-30 04:44:41.046857 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-01-30 04:44:41.046868 | orchestrator | Friday 30 January 2026 04:44:38 +0000 (0:00:00.982) 0:01:16.194 ******** 2026-01-30 04:44:41.046879 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-01-30 04:44:41.046893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:44:41.046902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:44:41.046911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:44:41.046932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:44:41.047078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:44:41.047108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-30 04:44:41.047132 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-01-30 04:44:41.047168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:41.047178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:44:41.047188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:41.047202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:44:41.047231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:41.047249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:44:41.047285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:44:44.732902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-30 04:44:44.733014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:44.733031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-30 04:44:44.733044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-30 04:44:44.733056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:44.733084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:44.733123 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-30 04:44:44.733156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:44:44.733170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:44:44.733181 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:44.733193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-30 04:44:44.733204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:44.733222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:44.733242 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 04:44:44.733254 | orchestrator | 2026-01-30 04:44:44.733267 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-01-30 04:44:44.733280 | orchestrator | Friday 30 January 2026 04:44:42 +0000 (0:00:04.289) 0:01:20.484 ******** 2026-01-30 04:44:44.733291 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-30 04:44:44.733302 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:44:44.733314 | orchestrator | 2026-01-30 04:44:44.733332 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-30 04:46:22.883588 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:01.154) 0:01:21.639 ******** 2026-01-30 04:46:22.883805 | orchestrator | 2026-01-30 04:46:22.883825 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-30 04:46:22.883839 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:00.232) 0:01:21.871 ******** 2026-01-30 04:46:22.883850 | orchestrator | 2026-01-30 04:46:22.883862 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-30 04:46:22.883873 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:00.069) 0:01:21.941 ******** 2026-01-30 04:46:22.883884 | orchestrator | 2026-01-30 04:46:22.883895 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-01-30 04:46:22.883906 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:00.067) 0:01:22.009 ******** 2026-01-30 04:46:22.883917 | orchestrator | 2026-01-30 04:46:22.883928 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-30 04:46:22.883939 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:00.064) 0:01:22.074 ******** 2026-01-30 04:46:22.883950 | orchestrator | 2026-01-30 04:46:22.883961 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-30 04:46:22.883972 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:00.067) 0:01:22.141 ******** 2026-01-30 04:46:22.883983 | orchestrator | 2026-01-30 04:46:22.883994 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-01-30 04:46:22.884005 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:00.063) 0:01:22.205 ******** 2026-01-30 04:46:22.884016 | orchestrator | 2026-01-30 04:46:22.884027 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-01-30 04:46:22.884038 | orchestrator | Friday 30 January 2026 04:44:44 +0000 (0:00:00.088) 0:01:22.293 ******** 2026-01-30 04:46:22.884050 | orchestrator | changed: [testbed-manager] 2026-01-30 04:46:22.884061 | orchestrator | 2026-01-30 04:46:22.884072 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-01-30 04:46:22.884083 | orchestrator | Friday 30 January 2026 04:45:05 +0000 (0:00:21.028) 0:01:43.322 ******** 2026-01-30 04:46:22.884094 | orchestrator | changed: [testbed-manager] 2026-01-30 04:46:22.884106 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:46:22.884119 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:46:22.884131 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:46:22.884145 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:46:22.884157 | 
orchestrator | changed: [testbed-node-0] 2026-01-30 04:46:22.884195 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:46:22.884208 | orchestrator | 2026-01-30 04:46:22.884221 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-01-30 04:46:22.884233 | orchestrator | Friday 30 January 2026 04:45:19 +0000 (0:00:13.272) 0:01:56.595 ******** 2026-01-30 04:46:22.884246 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:46:22.884258 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:46:22.884270 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:46:22.884282 | orchestrator | 2026-01-30 04:46:22.884295 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-01-30 04:46:22.884308 | orchestrator | Friday 30 January 2026 04:45:29 +0000 (0:00:10.679) 0:02:07.275 ******** 2026-01-30 04:46:22.884321 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:46:22.884333 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:46:22.884345 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:46:22.884357 | orchestrator | 2026-01-30 04:46:22.884370 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-01-30 04:46:22.884383 | orchestrator | Friday 30 January 2026 04:45:40 +0000 (0:00:10.614) 0:02:17.889 ******** 2026-01-30 04:46:22.884397 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:46:22.884415 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:46:22.884434 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:46:22.884458 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:46:22.884483 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:46:22.884501 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:46:22.884518 | orchestrator | changed: [testbed-manager] 2026-01-30 04:46:22.884536 | orchestrator | 2026-01-30 04:46:22.884569 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-alertmanager container] ******* 2026-01-30 04:46:22.884587 | orchestrator | Friday 30 January 2026 04:45:53 +0000 (0:00:13.280) 0:02:31.169 ******** 2026-01-30 04:46:22.884606 | orchestrator | changed: [testbed-manager] 2026-01-30 04:46:22.884624 | orchestrator | 2026-01-30 04:46:22.884686 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-01-30 04:46:22.884706 | orchestrator | Friday 30 January 2026 04:46:01 +0000 (0:00:07.841) 0:02:39.011 ******** 2026-01-30 04:46:22.884732 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:46:22.884753 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:46:22.884771 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:46:22.884788 | orchestrator | 2026-01-30 04:46:22.884806 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-01-30 04:46:22.884823 | orchestrator | Friday 30 January 2026 04:46:07 +0000 (0:00:05.655) 0:02:44.666 ******** 2026-01-30 04:46:22.884841 | orchestrator | changed: [testbed-manager] 2026-01-30 04:46:22.884859 | orchestrator | 2026-01-30 04:46:22.884876 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-01-30 04:46:22.884895 | orchestrator | Friday 30 January 2026 04:46:12 +0000 (0:00:05.260) 0:02:49.927 ******** 2026-01-30 04:46:22.884913 | orchestrator | changed: [testbed-node-5] 2026-01-30 04:46:22.884930 | orchestrator | changed: [testbed-node-3] 2026-01-30 04:46:22.884947 | orchestrator | changed: [testbed-node-4] 2026-01-30 04:46:22.884965 | orchestrator | 2026-01-30 04:46:22.884984 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:46:22.885005 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-30 04:46:22.885025 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=0 2026-01-30 04:46:22.885075 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-30 04:46:22.885097 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-01-30 04:46:22.885133 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-30 04:46:22.885145 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-30 04:46:22.885156 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-01-30 04:46:22.885167 | orchestrator | 2026-01-30 04:46:22.885178 | orchestrator | 2026-01-30 04:46:22.885189 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:46:22.885200 | orchestrator | Friday 30 January 2026 04:46:22 +0000 (0:00:09.924) 0:02:59.851 ******** 2026-01-30 04:46:22.885211 | orchestrator | =============================================================================== 2026-01-30 04:46:22.885222 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.28s 2026-01-30 04:46:22.885233 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.03s 2026-01-30 04:46:22.885244 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.48s 2026-01-30 04:46:22.885255 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.28s 2026-01-30 04:46:22.885266 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.27s 2026-01-30 04:46:22.885277 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.68s 2026-01-30 04:46:22.885288 | orchestrator | prometheus : Restart 
prometheus-memcached-exporter container ----------- 10.61s 2026-01-30 04:46:22.885299 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.92s 2026-01-30 04:46:22.885309 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.84s 2026-01-30 04:46:22.885320 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.23s 2026-01-30 04:46:22.885331 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.66s 2026-01-30 04:46:22.885342 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.37s 2026-01-30 04:46:22.885353 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.26s 2026-01-30 04:46:22.885364 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.29s 2026-01-30 04:46:22.885374 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.80s 2026-01-30 04:46:22.885385 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.64s 2026-01-30 04:46:22.885396 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.11s 2026-01-30 04:46:22.885407 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.91s 2026-01-30 04:46:22.885417 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.89s 2026-01-30 04:46:22.885428 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.62s 2026-01-30 04:46:25.329885 | orchestrator | 2026-01-30 04:46:25 | INFO  | Task 622b08bb-0e9d-4876-a23a-cacedb6a64f3 (grafana) was prepared for execution. 
2026-01-30 04:46:25.329970 | orchestrator | 2026-01-30 04:46:25 | INFO  | It takes a moment until task 622b08bb-0e9d-4876-a23a-cacedb6a64f3 (grafana) has been started and output is visible here. 2026-01-30 04:46:33.764257 | orchestrator | 2026-01-30 04:46:33.764377 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:46:33.764388 | orchestrator | 2026-01-30 04:46:33.764396 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:46:33.764404 | orchestrator | Friday 30 January 2026 04:46:28 +0000 (0:00:00.199) 0:00:00.199 ******** 2026-01-30 04:46:33.764411 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:46:33.764419 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:46:33.764425 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:46:33.764455 | orchestrator | 2026-01-30 04:46:33.764462 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:46:33.764469 | orchestrator | Friday 30 January 2026 04:46:28 +0000 (0:00:00.247) 0:00:00.447 ******** 2026-01-30 04:46:33.764475 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-30 04:46:33.764483 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-30 04:46:33.764489 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-30 04:46:33.764496 | orchestrator | 2026-01-30 04:46:33.764502 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-30 04:46:33.764508 | orchestrator | 2026-01-30 04:46:33.764515 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-30 04:46:33.764521 | orchestrator | Friday 30 January 2026 04:46:29 +0000 (0:00:00.350) 0:00:00.798 ******** 2026-01-30 04:46:33.764529 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-30 04:46:33.764537 | orchestrator | 2026-01-30 04:46:33.764543 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-30 04:46:33.764549 | orchestrator | Friday 30 January 2026 04:46:29 +0000 (0:00:00.443) 0:00:01.242 ******** 2026-01-30 04:46:33.764560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:33.764572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:33.764578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:33.764585 | orchestrator | 2026-01-30 04:46:33.764591 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-30 04:46:33.764598 | orchestrator | Friday 30 January 2026 04:46:30 +0000 (0:00:00.876) 0:00:02.118 ******** 2026-01-30 04:46:33.764604 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-01-30 04:46:33.764612 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-30 04:46:33.764618 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:46:33.764625 | orchestrator | 2026-01-30 04:46:33.764671 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-30 04:46:33.764685 | orchestrator | Friday 30 January 2026 04:46:31 +0000 (0:00:00.737) 0:00:02.856 ******** 2026-01-30 04:46:33.764691 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:46:33.764698 | orchestrator | 2026-01-30 04:46:33.764718 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-30 04:46:33.764725 | orchestrator | Friday 30 January 2026 04:46:31 +0000 (0:00:00.539) 0:00:03.395 ******** 2026-01-30 04:46:33.764749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:33.764756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:33.764763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:33.764770 | 
orchestrator | 2026-01-30 04:46:33.764776 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-30 04:46:33.764782 | orchestrator | Friday 30 January 2026 04:46:33 +0000 (0:00:01.354) 0:00:04.750 ******** 2026-01-30 04:46:33.764789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 04:46:33.764796 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:46:33.764803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 04:46:33.764815 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:46:33.764833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 04:46:40.478822 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:46:40.478964 | orchestrator | 2026-01-30 04:46:40.478993 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-30 04:46:40.479016 | orchestrator | Friday 30 January 2026 04:46:33 +0000 (0:00:00.615) 0:00:05.366 ******** 2026-01-30 04:46:40.479039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 04:46:40.479064 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:46:40.479085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 04:46:40.479103 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:46:40.479115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-30 04:46:40.479126 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:46:40.479137 | orchestrator | 2026-01-30 04:46:40.479148 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-30 04:46:40.479159 | orchestrator | Friday 30 January 2026 04:46:34 +0000 (0:00:00.631) 0:00:05.998 ******** 2026-01-30 04:46:40.479171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:40.479224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:40.479260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:40.479275 | orchestrator | 2026-01-30 04:46:40.479288 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-30 04:46:40.479301 | orchestrator | Friday 30 
January 2026 04:46:35 +0000 (0:00:01.213) 0:00:07.211 ******** 2026-01-30 04:46:40.479313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:40.479327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:40.479340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:46:40.479360 | orchestrator | 2026-01-30 04:46:40.479373 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-30 04:46:40.479386 | orchestrator | Friday 30 January 2026 04:46:37 +0000 (0:00:01.630) 0:00:08.841 ******** 2026-01-30 04:46:40.479398 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:46:40.479411 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:46:40.479425 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:46:40.479437 | orchestrator | 2026-01-30 04:46:40.479449 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-01-30 04:46:40.479462 | orchestrator | Friday 30 January 2026 04:46:37 +0000 (0:00:00.318) 0:00:09.160 ******** 2026-01-30 04:46:40.479474 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-30 04:46:40.479488 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-30 04:46:40.479500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-30 04:46:40.479512 | orchestrator | 2026-01-30 04:46:40.479525 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-30 04:46:40.479537 | orchestrator | Friday 30 January 2026 04:46:38 +0000 (0:00:01.221) 0:00:10.381 ******** 2026-01-30 04:46:40.479556 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-30 04:46:40.479570 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-30 04:46:40.479583 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-30 04:46:40.479596 | orchestrator | 2026-01-30 04:46:40.479608 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-30 04:46:40.479629 | orchestrator | Friday 30 January 2026 04:46:40 +0000 (0:00:01.699) 0:00:12.081 ******** 2026-01-30 04:46:46.933882 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-30 04:46:46.933959 | orchestrator | 2026-01-30 04:46:46.933966 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-01-30 04:46:46.933971 | orchestrator | Friday 30 January 2026 04:46:41 +0000 (0:00:00.719) 0:00:12.800 ******** 2026-01-30 04:46:46.933975 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-30 04:46:46.933981 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-30 04:46:46.933986 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:46:46.933990 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:46:46.933994 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:46:46.933998 | orchestrator | 2026-01-30 04:46:46.934002 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-30 04:46:46.934006 | orchestrator | Friday 30 January 2026 04:46:41 +0000 (0:00:00.690) 0:00:13.491 ******** 2026-01-30 04:46:46.934011 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:46:46.934045 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:46:46.934049 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:46:46.934053 | orchestrator | 2026-01-30 04:46:46.934057 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-30 
04:46:46.934061 | orchestrator | Friday 30 January 2026 04:46:42 +0000 (0:00:00.337) 0:00:13.828 ******** 2026-01-30 04:46:46.934067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1103534, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4152646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1103534, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4152646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1103534, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4152646, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103601, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.428771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103601, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.428771, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1103601, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.428771, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103554, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.418026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103554, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.418026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103554, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1769741416.418026, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103604, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4322855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103604, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4322855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:46.934181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1103604, 'dev': 125, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4322855, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.607883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103574, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4222648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.607997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1103574, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4222648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
26655, 'inode': 1103574, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4222648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103592, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4270964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103592, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4270964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1103592, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4270964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103531, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.414397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103531, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.414397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 84, 'inode': 1103531, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.414397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103545, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4162297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103545, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4162297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 34113, 'inode': 1103545, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4162297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:50.608163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103559, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4182646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.808926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103559, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4182646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103559, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4182646, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103583, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4242647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103583, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4242647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1103583, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4242647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103597, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.427265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103597, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.427265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1103597, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.427265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103548, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4178336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103548, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4178336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1103548, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4178336, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103590, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4262648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:54.809232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103590, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4262648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1103590, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4262648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103577, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4242647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103577, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4242647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1103577, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4242647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103570, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4219887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103570, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4219887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633287 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1103570, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4219887, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103567, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4202647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103567, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4202647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633299 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1103567, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4202647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103585, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4252648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:46:58.633310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103585, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4252648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-01-30 04:46:58.633321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1103585, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4252648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103561, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4192648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103561, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4192648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-01-30 04:47:02.782424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1103561, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4192648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103596, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.427265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103596, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.427265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1103596, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.427265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1103736, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4667683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1103736, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4667683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1103736, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4667683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103639, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.445265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103639, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.445265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103628, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.437265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:02.782701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1103639, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.445265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103628, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1769741416.437265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103667, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.448265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1103628, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.437265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103667, 'dev': 
125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.448265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103615, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4343112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1103667, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.448265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103615, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4343112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103696, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4586349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1103615, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4343112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103696, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4586349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103670, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4542654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1103696, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4586349, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:06.663386 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103670, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4542654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103701, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4592652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1103670, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4542654, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103701, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4592652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103733, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4652655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1103701, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4592652, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103733, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4652655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103690, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4562652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103733, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1769741416.4652655, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103690, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4562652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103655, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4466975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1103690, 'dev': 125, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4562652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103655, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4466975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:10.581435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103636, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.440265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 24243, 'inode': 1103655, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4466975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103636, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.440265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103652, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.445265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103636, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.440265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103631, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.439265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103652, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.445265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103652, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.445265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103658, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.448265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103631, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.439265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103631, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.439265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103720, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4648693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103658, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.448265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:14.795505 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1103658, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.448265, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103711, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.461549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103720, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4648693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103720, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4648693, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103619, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.435344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103711, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.461549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103711, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.461549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103621, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4365277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103619, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1769741416.435344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103619, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.435344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103686, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4556286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 
1103621, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4365277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103621, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4365277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:47:18.472774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103707, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4599183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:48:56.608490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103686, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4556286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:48:56.608688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103686, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4556286, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:48:56.608711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103707, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4599183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:48:56.608724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1103707, 'dev': 125, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769741416.4599183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-30 04:48:56.608738 | orchestrator | 2026-01-30 04:48:56.608751 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-01-30 04:48:56.608764 | orchestrator | Friday 30 January 2026 04:47:20 +0000 (0:00:38.122) 0:00:51.951 ******** 2026-01-30 04:48:56.608775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:48:56.608831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:48:56.608844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-30 04:48:56.608856 | orchestrator | 2026-01-30 04:48:56.608867 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-30 04:48:56.608884 | orchestrator | Friday 30 January 2026 04:47:21 +0000 (0:00:00.999) 0:00:52.951 ******** 2026-01-30 04:48:56.608895 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:48:56.608908 | orchestrator | 2026-01-30 04:48:56.608919 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-30 04:48:56.608930 | orchestrator | Friday 30 January 2026 04:47:23 +0000 (0:00:02.279) 0:00:55.230 ******** 2026-01-30 04:48:56.608940 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:48:56.608951 | orchestrator | 2026-01-30 04:48:56.608962 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-30 04:48:56.608973 | orchestrator | Friday 30 January 2026 04:47:25 +0000 (0:00:02.356) 0:00:57.586 ******** 
2026-01-30 04:48:56.608984 | orchestrator | 2026-01-30 04:48:56.608994 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-30 04:48:56.609008 | orchestrator | Friday 30 January 2026 04:47:26 +0000 (0:00:00.083) 0:00:57.670 ******** 2026-01-30 04:48:56.609020 | orchestrator | 2026-01-30 04:48:56.609032 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-30 04:48:56.609044 | orchestrator | Friday 30 January 2026 04:47:26 +0000 (0:00:00.068) 0:00:57.739 ******** 2026-01-30 04:48:56.609056 | orchestrator | 2026-01-30 04:48:56.609069 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-30 04:48:56.609082 | orchestrator | Friday 30 January 2026 04:47:26 +0000 (0:00:00.068) 0:00:57.807 ******** 2026-01-30 04:48:56.609094 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:48:56.609107 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:48:56.609119 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:48:56.609130 | orchestrator | 2026-01-30 04:48:56.609141 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-30 04:48:56.609151 | orchestrator | Friday 30 January 2026 04:47:28 +0000 (0:00:02.137) 0:00:59.945 ******** 2026-01-30 04:48:56.609162 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:48:56.609173 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:48:56.609193 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-30 04:48:56.609205 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-30 04:48:56.609216 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-01-30 04:48:56.609227 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-01-30 04:48:56.609238 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:48:56.609250 | orchestrator | 2026-01-30 04:48:56.609260 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-30 04:48:56.609271 | orchestrator | Friday 30 January 2026 04:48:19 +0000 (0:00:50.863) 0:01:50.808 ******** 2026-01-30 04:48:56.609282 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:48:56.609293 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:48:56.609304 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:48:56.609315 | orchestrator | 2026-01-30 04:48:56.609326 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-30 04:48:56.609337 | orchestrator | Friday 30 January 2026 04:48:51 +0000 (0:00:32.044) 0:02:22.852 ******** 2026-01-30 04:48:56.609347 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:48:56.609358 | orchestrator | 2026-01-30 04:48:56.609369 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-30 04:48:56.609380 | orchestrator | Friday 30 January 2026 04:48:53 +0000 (0:00:02.303) 0:02:25.156 ******** 2026-01-30 04:48:56.609391 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:48:56.609402 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:48:56.609413 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:48:56.609423 | orchestrator | 2026-01-30 04:48:56.609434 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-30 04:48:56.609445 | orchestrator | Friday 30 January 2026 04:48:53 +0000 (0:00:00.302) 0:02:25.458 ******** 2026-01-30 04:48:56.609458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-30 04:48:56.609479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-30 04:48:57.189370 | orchestrator | 2026-01-30 04:48:57.189471 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-30 04:48:57.189487 | orchestrator | Friday 30 January 2026 04:48:56 +0000 (0:00:02.745) 0:02:28.203 ******** 2026-01-30 04:48:57.189499 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:48:57.189512 | orchestrator | 2026-01-30 04:48:57.189523 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:48:57.189536 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-30 04:48:57.189549 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-30 04:48:57.189560 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-30 04:48:57.189571 | orchestrator | 2026-01-30 04:48:57.189582 | orchestrator | 2026-01-30 04:48:57.189612 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:48:57.189623 | orchestrator | Friday 30 January 2026 04:48:56 +0000 (0:00:00.286) 0:02:28.490 ******** 2026-01-30 04:48:57.189706 | orchestrator | =============================================================================== 2026-01-30 04:48:57.189718 | orchestrator | grafana : 
Waiting for grafana to start on first node ------------------- 50.86s 2026-01-30 04:48:57.189729 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.12s 2026-01-30 04:48:57.189740 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.04s 2026-01-30 04:48:57.189751 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.75s 2026-01-30 04:48:57.189762 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.36s 2026-01-30 04:48:57.189773 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.30s 2026-01-30 04:48:57.189784 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.28s 2026-01-30 04:48:57.189795 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.14s 2026-01-30 04:48:57.189806 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.70s 2026-01-30 04:48:57.189817 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.63s 2026-01-30 04:48:57.189827 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.35s 2026-01-30 04:48:57.189838 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.22s 2026-01-30 04:48:57.189849 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.21s 2026-01-30 04:48:57.189860 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.00s 2026-01-30 04:48:57.189871 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.88s 2026-01-30 04:48:57.189882 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.74s 2026-01-30 04:48:57.189892 | orchestrator | grafana : Find custom 
grafana dashboards -------------------------------- 0.72s 2026-01-30 04:48:57.189903 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.69s 2026-01-30 04:48:57.189915 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.63s 2026-01-30 04:48:57.189928 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.62s 2026-01-30 04:48:57.519155 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-01-30 04:48:57.527037 | orchestrator | + set -e 2026-01-30 04:48:57.527126 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-30 04:48:57.527143 | orchestrator | ++ export INTERACTIVE=false 2026-01-30 04:48:57.527156 | orchestrator | ++ INTERACTIVE=false 2026-01-30 04:48:57.527168 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-30 04:48:57.527179 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-30 04:48:57.527190 | orchestrator | + source /opt/manager-vars.sh 2026-01-30 04:48:57.527201 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-30 04:48:57.527212 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-30 04:48:57.527223 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-30 04:48:57.527235 | orchestrator | ++ CEPH_VERSION=reef 2026-01-30 04:48:57.527246 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-30 04:48:57.527258 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-30 04:48:57.527270 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-30 04:48:57.527281 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-30 04:48:57.527292 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-30 04:48:57.527303 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-30 04:48:57.527315 | orchestrator | ++ export ARA=false 2026-01-30 04:48:57.527326 | orchestrator | ++ ARA=false 2026-01-30 04:48:57.527337 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-30 04:48:57.527348 | orchestrator | 
++ DEPLOY_MODE=manager 2026-01-30 04:48:57.527359 | orchestrator | ++ export TEMPEST=false 2026-01-30 04:48:57.527370 | orchestrator | ++ TEMPEST=false 2026-01-30 04:48:57.527380 | orchestrator | ++ export IS_ZUUL=true 2026-01-30 04:48:57.527391 | orchestrator | ++ IS_ZUUL=true 2026-01-30 04:48:57.527402 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 04:48:57.527413 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 04:48:57.527424 | orchestrator | ++ export EXTERNAL_API=false 2026-01-30 04:48:57.527435 | orchestrator | ++ EXTERNAL_API=false 2026-01-30 04:48:57.527446 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-30 04:48:57.527484 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-30 04:48:57.527496 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-30 04:48:57.527507 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-30 04:48:57.527518 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-30 04:48:57.527529 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-30 04:48:57.528355 | orchestrator | ++ semver 9.5.0 8.0.0 2026-01-30 04:48:57.596124 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-30 04:48:57.596386 | orchestrator | + osism apply clusterapi 2026-01-30 04:48:59.595174 | orchestrator | 2026-01-30 04:48:59 | INFO  | Task 5e13a791-86d5-42da-a115-73b570bcd053 (clusterapi) was prepared for execution. 2026-01-30 04:48:59.595267 | orchestrator | 2026-01-30 04:48:59 | INFO  | It takes a moment until task 5e13a791-86d5-42da-a115-73b570bcd053 (clusterapi) has been started and output is visible here. 
2026-01-30 04:49:52.928750 | orchestrator | 2026-01-30 04:49:52.928832 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-01-30 04:49:52.928839 | orchestrator | 2026-01-30 04:49:52.928843 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-01-30 04:49:52.928850 | orchestrator | Friday 30 January 2026 04:49:03 +0000 (0:00:00.184) 0:00:00.184 ******** 2026-01-30 04:49:52.928857 | orchestrator | included: cert_manager for testbed-manager 2026-01-30 04:49:52.928864 | orchestrator | 2026-01-30 04:49:52.928874 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-01-30 04:49:52.928881 | orchestrator | Friday 30 January 2026 04:49:03 +0000 (0:00:00.229) 0:00:00.414 ******** 2026-01-30 04:49:52.928887 | orchestrator | changed: [testbed-manager] 2026-01-30 04:49:52.928895 | orchestrator | 2026-01-30 04:49:52.928901 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-01-30 04:49:52.928908 | orchestrator | Friday 30 January 2026 04:49:09 +0000 (0:00:05.243) 0:00:05.658 ******** 2026-01-30 04:49:52.928914 | orchestrator | changed: [testbed-manager] 2026-01-30 04:49:52.928920 | orchestrator | 2026-01-30 04:49:52.928926 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-01-30 04:49:52.928931 | orchestrator | 2026-01-30 04:49:52.928937 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-01-30 04:49:52.928959 | orchestrator | Friday 30 January 2026 04:49:32 +0000 (0:00:23.113) 0:00:28.771 ******** 2026-01-30 04:49:52.928966 | orchestrator | ok: [testbed-manager] 2026-01-30 04:49:52.928973 | orchestrator | 2026-01-30 04:49:52.928978 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-01-30 04:49:52.928985 | orchestrator | Friday 30 
January 2026 04:49:33 +0000 (0:00:01.043) 0:00:29.814 ******** 2026-01-30 04:49:52.928991 | orchestrator | ok: [testbed-manager] 2026-01-30 04:49:52.928997 | orchestrator | 2026-01-30 04:49:52.929003 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-01-30 04:49:52.929010 | orchestrator | Friday 30 January 2026 04:49:33 +0000 (0:00:00.130) 0:00:29.945 ******** 2026-01-30 04:49:52.929016 | orchestrator | ok: [testbed-manager] 2026-01-30 04:49:52.929022 | orchestrator | 2026-01-30 04:49:52.929028 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-01-30 04:49:52.929034 | orchestrator | Friday 30 January 2026 04:49:50 +0000 (0:00:16.748) 0:00:46.694 ******** 2026-01-30 04:49:52.929041 | orchestrator | skipping: [testbed-manager] 2026-01-30 04:49:52.929047 | orchestrator | 2026-01-30 04:49:52.929054 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-01-30 04:49:52.929060 | orchestrator | Friday 30 January 2026 04:49:50 +0000 (0:00:00.130) 0:00:46.824 ******** 2026-01-30 04:49:52.929066 | orchestrator | changed: [testbed-manager] 2026-01-30 04:49:52.929073 | orchestrator | 2026-01-30 04:49:52.929079 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:49:52.929087 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 04:49:52.929094 | orchestrator | 2026-01-30 04:49:52.929100 | orchestrator | 2026-01-30 04:49:52.929106 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:49:52.929134 | orchestrator | Friday 30 January 2026 04:49:52 +0000 (0:00:02.293) 0:00:49.118 ******** 2026-01-30 04:49:52.929138 | orchestrator | =============================================================================== 2026-01-30 04:49:52.929142 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 23.11s 2026-01-30 04:49:52.929146 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.75s 2026-01-30 04:49:52.929150 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.24s 2026-01-30 04:49:52.929154 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.29s 2026-01-30 04:49:52.929157 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.04s 2026-01-30 04:49:52.929161 | orchestrator | Include cert_manager role ----------------------------------------------- 0.23s 2026-01-30 04:49:52.929165 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.13s 2026-01-30 04:49:52.929169 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.13s 2026-01-30 04:49:53.214210 | orchestrator | + osism apply magnum 2026-01-30 04:49:55.129162 | orchestrator | 2026-01-30 04:49:55 | INFO  | Task f6fd8984-66cb-4824-9f5d-8edda5d2e171 (magnum) was prepared for execution. 2026-01-30 04:49:55.129270 | orchestrator | 2026-01-30 04:49:55 | INFO  | It takes a moment until task f6fd8984-66cb-4824-9f5d-8edda5d2e171 (magnum) has been started and output is visible here. 
2026-01-30 04:50:38.154879 | orchestrator | 2026-01-30 04:50:38.155046 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:50:38.155067 | orchestrator | 2026-01-30 04:50:38.155079 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:50:38.155092 | orchestrator | Friday 30 January 2026 04:49:58 +0000 (0:00:00.218) 0:00:00.218 ******** 2026-01-30 04:50:38.155103 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:50:38.155116 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:50:38.155127 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:50:38.155138 | orchestrator | 2026-01-30 04:50:38.155149 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:50:38.155160 | orchestrator | Friday 30 January 2026 04:49:59 +0000 (0:00:00.277) 0:00:00.496 ******** 2026-01-30 04:50:38.155171 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-01-30 04:50:38.155183 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-01-30 04:50:38.155194 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-01-30 04:50:38.155205 | orchestrator | 2026-01-30 04:50:38.155215 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-01-30 04:50:38.155227 | orchestrator | 2026-01-30 04:50:38.155238 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-30 04:50:38.155249 | orchestrator | Friday 30 January 2026 04:49:59 +0000 (0:00:00.307) 0:00:00.804 ******** 2026-01-30 04:50:38.155259 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:50:38.155271 | orchestrator | 2026-01-30 04:50:38.155283 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-01-30 
04:50:38.155360 | orchestrator | Friday 30 January 2026 04:49:59 +0000 (0:00:00.447) 0:00:01.251 ******** 2026-01-30 04:50:38.155376 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-01-30 04:50:38.155387 | orchestrator | 2026-01-30 04:50:38.155398 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-01-30 04:50:38.155409 | orchestrator | Friday 30 January 2026 04:50:03 +0000 (0:00:03.663) 0:00:04.915 ******** 2026-01-30 04:50:38.155423 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-01-30 04:50:38.155437 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-01-30 04:50:38.155449 | orchestrator | 2026-01-30 04:50:38.155491 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-01-30 04:50:38.155518 | orchestrator | Friday 30 January 2026 04:50:10 +0000 (0:00:06.594) 0:00:11.510 ******** 2026-01-30 04:50:38.155530 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-30 04:50:38.155541 | orchestrator | 2026-01-30 04:50:38.155553 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-01-30 04:50:38.155564 | orchestrator | Friday 30 January 2026 04:50:13 +0000 (0:00:03.519) 0:00:15.029 ******** 2026-01-30 04:50:38.155574 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-30 04:50:38.155586 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-01-30 04:50:38.155597 | orchestrator | 2026-01-30 04:50:38.155608 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-01-30 04:50:38.155619 | orchestrator | Friday 30 January 2026 04:50:17 +0000 (0:00:04.143) 0:00:19.172 ******** 2026-01-30 04:50:38.155663 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-01-30 04:50:38.155675 | orchestrator | 2026-01-30 04:50:38.155686 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-01-30 04:50:38.155697 | orchestrator | Friday 30 January 2026 04:50:21 +0000 (0:00:03.368) 0:00:22.540 ******** 2026-01-30 04:50:38.155708 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-01-30 04:50:38.155718 | orchestrator | 2026-01-30 04:50:38.155729 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-01-30 04:50:38.155740 | orchestrator | Friday 30 January 2026 04:50:25 +0000 (0:00:03.921) 0:00:26.462 ******** 2026-01-30 04:50:38.155751 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:50:38.155762 | orchestrator | 2026-01-30 04:50:38.155772 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-01-30 04:50:38.155783 | orchestrator | Friday 30 January 2026 04:50:28 +0000 (0:00:03.556) 0:00:30.019 ******** 2026-01-30 04:50:38.155794 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:50:38.155805 | orchestrator | 2026-01-30 04:50:38.155816 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-01-30 04:50:38.155827 | orchestrator | Friday 30 January 2026 04:50:32 +0000 (0:00:04.195) 0:00:34.214 ******** 2026-01-30 04:50:38.155838 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:50:38.155849 | orchestrator | 2026-01-30 04:50:38.155859 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-01-30 04:50:38.155870 | orchestrator | Friday 30 January 2026 04:50:36 +0000 (0:00:03.744) 0:00:37.958 ******** 2026-01-30 04:50:38.155906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:38.155922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:38.155949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:38.155962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:38.155975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:38.155995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:45.354967 | orchestrator |
2026-01-30 04:50:45.355051 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-01-30 04:50:45.355061 | orchestrator | Friday 30 January 2026 04:50:38 +0000 (0:00:01.538) 0:00:39.497 ********
2026-01-30 04:50:45.355069 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:50:45.355077 | orchestrator |
2026-01-30 04:50:45.355084 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-30 04:50:45.355109 | orchestrator | Friday 30 January 2026 04:50:38 +0000 (0:00:00.135) 0:00:39.632 ********
2026-01-30 04:50:45.355115 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:50:45.355122 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:50:45.355128 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:50:45.355134 | orchestrator |
2026-01-30 04:50:45.355141 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-30 04:50:45.355147 | orchestrator | Friday 30 January 2026 04:50:38 +0000 (0:00:00.307) 0:00:39.939 ********
2026-01-30 04:50:45.355154 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 04:50:45.355160 | orchestrator |
2026-01-30 04:50:45.355167 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-01-30 04:50:45.355173 | orchestrator | Friday 30 January 2026 04:50:39 +0000 (0:00:00.795) 0:00:40.735 ********
2026-01-30 04:50:45.355193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:45.355202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:45.355210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:45.355230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:45.355243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:45.355253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:45.355260 | orchestrator |
2026-01-30 04:50:45.355267 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-01-30 04:50:45.355273 | orchestrator | Friday 30 January 2026 04:50:41 +0000 (0:00:02.393) 0:00:43.128 ********
2026-01-30 04:50:45.355279 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:50:45.355286 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:50:45.355293 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:50:45.355299 | orchestrator |
2026-01-30 04:50:45.355305 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-30 04:50:45.355311 | orchestrator | Friday 30 January 2026 04:50:42 +0000 (0:00:00.449) 0:00:43.578 ********
2026-01-30 04:50:45.355318 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 04:50:45.355325 | orchestrator |
2026-01-30 04:50:45.355331 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-01-30 04:50:45.355337 | orchestrator | Friday 30 January 2026 04:50:42 +0000 (0:00:00.537) 0:00:44.116 ********
2026-01-30 04:50:45.355344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:45.355356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:46.232478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:46.232588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:46.232599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:46.232606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:46.232672 | orchestrator |
2026-01-30 04:50:46.232682 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-01-30 04:50:46.232689 | orchestrator | Friday 30 January 2026 04:50:45 +0000 (0:00:02.595) 0:00:46.711 ********
2026-01-30 04:50:46.232712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:46.232719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:46.232726 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:50:46.232739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:46.232746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:46.232753 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:50:46.232761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:46.232777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:49.764182 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:50:49.764304 | orchestrator |
2026-01-30 04:50:49.764327 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-01-30 04:50:49.764344 | orchestrator | Friday 30 January 2026 04:50:46 +0000 (0:00:00.868) 0:00:47.579 ********
2026-01-30 04:50:49.764363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:49.764401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:49.764418 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:50:49.764434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:49.764576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:49.764599 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:50:49.764713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:49.764747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:49.764764 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:50:49.764780 | orchestrator |
2026-01-30 04:50:49.764797 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-01-30 04:50:49.764813 | orchestrator | Friday 30 January 2026 04:50:47 +0000 (0:00:00.870) 0:00:48.450 ********
2026-01-30 04:50:49.764829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:49.764858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:49.764889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:55.843518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:55.843705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:55.843728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:55.843764 | orchestrator |
2026-01-30 04:50:55.843779 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-01-30 04:50:55.843792 | orchestrator | Friday 30 January 2026 04:50:49 +0000 (0:00:02.670) 0:00:51.120 ********
2026-01-30 04:50:55.843804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:55.843838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:55.843851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-30 04:50:55.843868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:55.843888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-30 04:50:55.843900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor',
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:50:55.843911 | orchestrator | 2026-01-30 04:50:55.843923 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-30 04:50:55.843934 | orchestrator | Friday 30 January 2026 04:50:55 +0000 (0:00:05.428) 0:00:56.548 ******** 2026-01-30 04:50:55.843969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-30 04:50:57.712184 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 04:50:57.712278 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:50:57.712292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-30 04:50:57.712328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 04:50:57.712343 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:50:57.712353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-30 04:50:57.712378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 04:50:57.712389 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:50:57.712399 | orchestrator | 2026-01-30 04:50:57.712409 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-30 04:50:57.712419 | orchestrator | Friday 30 January 2026 04:50:55 +0000 (0:00:00.652) 0:00:57.201 ******** 2026-01-30 04:50:57.712436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-30 04:50:57.712454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-30 04:50:57.712464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-30 04:50:57.712473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:50:57.712493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-30 04:51:49.745485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-01-30 04:51:49.745719 | orchestrator | 2026-01-30 04:51:49.745743 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-30 04:51:49.745757 | orchestrator | Friday 30 January 2026 04:50:57 +0000 (0:00:01.865) 0:00:59.067 ******** 2026-01-30 04:51:49.745768 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:51:49.745781 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:51:49.745792 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:51:49.745803 | orchestrator | 2026-01-30 04:51:49.745814 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-30 04:51:49.745825 | orchestrator | Friday 30 January 2026 04:50:58 +0000 (0:00:00.537) 0:00:59.604 ******** 2026-01-30 04:51:49.745837 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:51:49.745847 | orchestrator | 2026-01-30 04:51:49.745858 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-30 04:51:49.745869 | orchestrator | Friday 30 January 2026 04:51:00 +0000 (0:00:02.310) 0:01:01.915 ******** 2026-01-30 04:51:49.745880 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:51:49.745891 | orchestrator | 2026-01-30 04:51:49.745902 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-30 04:51:49.745913 | orchestrator | Friday 30 January 2026 04:51:03 +0000 (0:00:02.497) 0:01:04.412 ******** 2026-01-30 04:51:49.745925 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:51:49.745935 | orchestrator | 2026-01-30 04:51:49.745946 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-30 04:51:49.745957 | orchestrator | Friday 30 January 2026 04:51:19 +0000 (0:00:16.636) 0:01:21.049 ******** 2026-01-30 04:51:49.745968 | orchestrator | 2026-01-30 04:51:49.745980 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-01-30 04:51:49.745992 | orchestrator | Friday 30 January 2026 04:51:19 +0000 (0:00:00.069) 0:01:21.118 ******** 2026-01-30 04:51:49.746004 | orchestrator | 2026-01-30 04:51:49.746074 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-30 04:51:49.746089 | orchestrator | Friday 30 January 2026 04:51:19 +0000 (0:00:00.068) 0:01:21.186 ******** 2026-01-30 04:51:49.746100 | orchestrator | 2026-01-30 04:51:49.746112 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-30 04:51:49.746124 | orchestrator | Friday 30 January 2026 04:51:19 +0000 (0:00:00.069) 0:01:21.256 ******** 2026-01-30 04:51:49.746136 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:51:49.746148 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:51:49.746160 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:51:49.746172 | orchestrator | 2026-01-30 04:51:49.746183 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-30 04:51:49.746195 | orchestrator | Friday 30 January 2026 04:51:39 +0000 (0:00:19.322) 0:01:40.579 ******** 2026-01-30 04:51:49.746207 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:51:49.746219 | orchestrator | changed: [testbed-node-2] 2026-01-30 04:51:49.746231 | orchestrator | changed: [testbed-node-1] 2026-01-30 04:51:49.746243 | orchestrator | 2026-01-30 04:51:49.746254 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:51:49.746267 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 04:51:49.746282 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-30 04:51:49.746294 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-01-30 04:51:49.746315 | orchestrator | 2026-01-30 04:51:49.746328 | orchestrator | 2026-01-30 04:51:49.746341 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:51:49.746353 | orchestrator | Friday 30 January 2026 04:51:49 +0000 (0:00:10.184) 0:01:50.764 ******** 2026-01-30 04:51:49.746364 | orchestrator | =============================================================================== 2026-01-30 04:51:49.746375 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.32s 2026-01-30 04:51:49.746387 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.64s 2026-01-30 04:51:49.746398 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.19s 2026-01-30 04:51:49.746409 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.59s 2026-01-30 04:51:49.746420 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.43s 2026-01-30 04:51:49.746431 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.20s 2026-01-30 04:51:49.746442 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.14s 2026-01-30 04:51:49.746473 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.92s 2026-01-30 04:51:49.746484 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.74s 2026-01-30 04:51:49.746494 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.66s 2026-01-30 04:51:49.746505 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.56s 2026-01-30 04:51:49.746515 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.52s 2026-01-30 04:51:49.746535 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.37s 2026-01-30 04:51:49.746545 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.67s 2026-01-30 04:51:49.746555 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.60s 2026-01-30 04:51:49.746566 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.50s 2026-01-30 04:51:49.746577 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.39s 2026-01-30 04:51:49.746587 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.31s 2026-01-30 04:51:49.746598 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.87s 2026-01-30 04:51:49.746609 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.54s 2026-01-30 04:51:50.426010 | orchestrator | ok: Runtime: 1:39:49.246493 2026-01-30 04:51:50.674640 | 2026-01-30 04:51:50.674806 | TASK [Deploy in a nutshell] 2026-01-30 04:51:51.208815 | orchestrator | skipping: Conditional result was False 2026-01-30 04:51:51.231228 | 2026-01-30 04:51:51.231387 | TASK [Bootstrap services] 2026-01-30 04:51:51.910159 | orchestrator | 2026-01-30 04:51:51.910369 | orchestrator | # BOOTSTRAP 2026-01-30 04:51:51.910413 | orchestrator | 2026-01-30 04:51:51.910440 | orchestrator | + set -e 2026-01-30 04:51:51.910461 | orchestrator | + echo 2026-01-30 04:51:51.910483 | orchestrator | + echo '# BOOTSTRAP' 2026-01-30 04:51:51.910511 | orchestrator | + echo 2026-01-30 04:51:51.910590 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-30 04:51:51.919205 | orchestrator | + set -e 2026-01-30 04:51:51.919335 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-01-30 04:51:54.006979 | orchestrator | 2026-01-30 04:51:54 | INFO  | It takes a 
moment until task ad67e4b6-f5e6-4697-b4d2-cd7d3ae81179 (flavor-manager) has been started and output is visible here. 2026-01-30 04:52:01.718767 | orchestrator | 2026-01-30 04:51:56 | INFO  | Flavor SCS-1L-1 created 2026-01-30 04:52:01.718863 | orchestrator | 2026-01-30 04:51:56 | INFO  | Flavor SCS-1L-1-5 created 2026-01-30 04:52:01.718874 | orchestrator | 2026-01-30 04:51:57 | INFO  | Flavor SCS-1V-2 created 2026-01-30 04:52:01.718881 | orchestrator | 2026-01-30 04:51:57 | INFO  | Flavor SCS-1V-2-5 created 2026-01-30 04:52:01.718888 | orchestrator | 2026-01-30 04:51:57 | INFO  | Flavor SCS-1V-4 created 2026-01-30 04:52:01.718894 | orchestrator | 2026-01-30 04:51:57 | INFO  | Flavor SCS-1V-4-10 created 2026-01-30 04:52:01.718900 | orchestrator | 2026-01-30 04:51:58 | INFO  | Flavor SCS-1V-8 created 2026-01-30 04:52:01.718906 | orchestrator | 2026-01-30 04:51:58 | INFO  | Flavor SCS-1V-8-20 created 2026-01-30 04:52:01.718918 | orchestrator | 2026-01-30 04:51:58 | INFO  | Flavor SCS-2V-4 created 2026-01-30 04:52:01.718923 | orchestrator | 2026-01-30 04:51:58 | INFO  | Flavor SCS-2V-4-10 created 2026-01-30 04:52:01.718927 | orchestrator | 2026-01-30 04:51:58 | INFO  | Flavor SCS-2V-8 created 2026-01-30 04:52:01.718932 | orchestrator | 2026-01-30 04:51:58 | INFO  | Flavor SCS-2V-8-20 created 2026-01-30 04:52:01.718936 | orchestrator | 2026-01-30 04:51:59 | INFO  | Flavor SCS-2V-16 created 2026-01-30 04:52:01.718941 | orchestrator | 2026-01-30 04:51:59 | INFO  | Flavor SCS-2V-16-50 created 2026-01-30 04:52:01.718945 | orchestrator | 2026-01-30 04:51:59 | INFO  | Flavor SCS-4V-8 created 2026-01-30 04:52:01.718950 | orchestrator | 2026-01-30 04:51:59 | INFO  | Flavor SCS-4V-8-20 created 2026-01-30 04:52:01.718955 | orchestrator | 2026-01-30 04:51:59 | INFO  | Flavor SCS-4V-16 created 2026-01-30 04:52:01.718959 | orchestrator | 2026-01-30 04:51:59 | INFO  | Flavor SCS-4V-16-50 created 2026-01-30 04:52:01.718964 | orchestrator | 2026-01-30 04:51:59 | INFO  | Flavor 
SCS-4V-32 created 2026-01-30 04:52:01.718968 | orchestrator | 2026-01-30 04:52:00 | INFO  | Flavor SCS-4V-32-100 created 2026-01-30 04:52:01.718973 | orchestrator | 2026-01-30 04:52:00 | INFO  | Flavor SCS-8V-16 created 2026-01-30 04:52:01.718977 | orchestrator | 2026-01-30 04:52:00 | INFO  | Flavor SCS-8V-16-50 created 2026-01-30 04:52:01.718982 | orchestrator | 2026-01-30 04:52:00 | INFO  | Flavor SCS-8V-32 created 2026-01-30 04:52:01.718988 | orchestrator | 2026-01-30 04:52:00 | INFO  | Flavor SCS-8V-32-100 created 2026-01-30 04:52:01.718996 | orchestrator | 2026-01-30 04:52:00 | INFO  | Flavor SCS-16V-32 created 2026-01-30 04:52:01.719004 | orchestrator | 2026-01-30 04:52:01 | INFO  | Flavor SCS-16V-32-100 created 2026-01-30 04:52:01.719010 | orchestrator | 2026-01-30 04:52:01 | INFO  | Flavor SCS-2V-4-20s created 2026-01-30 04:52:01.719018 | orchestrator | 2026-01-30 04:52:01 | INFO  | Flavor SCS-4V-8-50s created 2026-01-30 04:52:01.719025 | orchestrator | 2026-01-30 04:52:01 | INFO  | Flavor SCS-8V-32-100s created 2026-01-30 04:52:03.930370 | orchestrator | 2026-01-30 04:52:03 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-30 04:52:14.032028 | orchestrator | 2026-01-30 04:52:14 | INFO  | Task 966e40fb-fdbf-43d2-8f69-505b17e8d6ef (bootstrap-basic) was prepared for execution. 2026-01-30 04:52:14.032119 | orchestrator | 2026-01-30 04:52:14 | INFO  | It takes a moment until task 966e40fb-fdbf-43d2-8f69-505b17e8d6ef (bootstrap-basic) has been started and output is visible here. 
2026-01-30 04:52:54.673714 | orchestrator | 2026-01-30 04:52:54.673846 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-30 04:52:54.673867 | orchestrator | 2026-01-30 04:52:54.673885 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-30 04:52:54.673902 | orchestrator | Friday 30 January 2026 04:52:18 +0000 (0:00:00.064) 0:00:00.064 ******** 2026-01-30 04:52:54.673919 | orchestrator | ok: [localhost] 2026-01-30 04:52:54.673937 | orchestrator | 2026-01-30 04:52:54.673953 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-30 04:52:54.673970 | orchestrator | Friday 30 January 2026 04:52:20 +0000 (0:00:01.811) 0:00:01.876 ******** 2026-01-30 04:52:54.673986 | orchestrator | ok: [localhost] 2026-01-30 04:52:54.674001 | orchestrator | 2026-01-30 04:52:54.674066 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-30 04:52:54.674085 | orchestrator | Friday 30 January 2026 04:52:26 +0000 (0:00:06.557) 0:00:08.433 ******** 2026-01-30 04:52:54.674134 | orchestrator | changed: [localhost] 2026-01-30 04:52:54.674151 | orchestrator | 2026-01-30 04:52:54.674167 | orchestrator | TASK [Create public network] *************************************************** 2026-01-30 04:52:54.674184 | orchestrator | Friday 30 January 2026 04:52:32 +0000 (0:00:05.932) 0:00:14.366 ******** 2026-01-30 04:52:54.674200 | orchestrator | changed: [localhost] 2026-01-30 04:52:54.674217 | orchestrator | 2026-01-30 04:52:54.674235 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-30 04:52:54.674252 | orchestrator | Friday 30 January 2026 04:52:37 +0000 (0:00:04.980) 0:00:19.346 ******** 2026-01-30 04:52:54.674273 | orchestrator | changed: [localhost] 2026-01-30 04:52:54.674290 | orchestrator | 2026-01-30 04:52:54.674307 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-30 04:52:54.674322 | orchestrator | Friday 30 January 2026 04:52:43 +0000 (0:00:05.797) 0:00:25.143 ******** 2026-01-30 04:52:54.674339 | orchestrator | changed: [localhost] 2026-01-30 04:52:54.674356 | orchestrator | 2026-01-30 04:52:54.674372 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-30 04:52:54.674387 | orchestrator | Friday 30 January 2026 04:52:47 +0000 (0:00:03.962) 0:00:29.106 ******** 2026-01-30 04:52:54.674405 | orchestrator | changed: [localhost] 2026-01-30 04:52:54.674422 | orchestrator | 2026-01-30 04:52:54.674439 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-30 04:52:54.674466 | orchestrator | Friday 30 January 2026 04:52:50 +0000 (0:00:03.696) 0:00:32.802 ******** 2026-01-30 04:52:54.674483 | orchestrator | ok: [localhost] 2026-01-30 04:52:54.674499 | orchestrator | 2026-01-30 04:52:54.674516 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:52:54.674533 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 04:52:54.674551 | orchestrator | 2026-01-30 04:52:54.674565 | orchestrator | 2026-01-30 04:52:54.674581 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:52:54.674597 | orchestrator | Friday 30 January 2026 04:52:54 +0000 (0:00:03.467) 0:00:36.270 ******** 2026-01-30 04:52:54.674614 | orchestrator | =============================================================================== 2026-01-30 04:52:54.674630 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.56s 2026-01-30 04:52:54.674664 | orchestrator | Create volume type LUKS ------------------------------------------------- 5.93s 2026-01-30 04:52:54.674679 | 
orchestrator | Set public network to default ------------------------------------------- 5.80s 2026-01-30 04:52:54.674693 | orchestrator | Create public network --------------------------------------------------- 4.98s 2026-01-30 04:52:54.674735 | orchestrator | Create public subnet ---------------------------------------------------- 3.96s 2026-01-30 04:52:54.674751 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.70s 2026-01-30 04:52:54.674767 | orchestrator | Create manager role ----------------------------------------------------- 3.47s 2026-01-30 04:52:54.674782 | orchestrator | Gathering Facts --------------------------------------------------------- 1.81s 2026-01-30 04:52:57.168112 | orchestrator | 2026-01-30 04:52:57 | INFO  | It takes a moment until task f4ef417e-381b-46bd-a505-63354b5eeba5 (image-manager) has been started and output is visible here. 2026-01-30 04:53:39.654745 | orchestrator | 2026-01-30 04:52:59 | INFO  | Processing image 'Cirros 0.6.2' 2026-01-30 04:53:39.654871 | orchestrator | 2026-01-30 04:53:00 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-01-30 04:53:39.654899 | orchestrator | 2026-01-30 04:53:00 | INFO  | Importing image Cirros 0.6.2 2026-01-30 04:53:39.654916 | orchestrator | 2026-01-30 04:53:00 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-01-30 04:53:39.654933 | orchestrator | 2026-01-30 04:53:02 | INFO  | Waiting for image to leave queued state... 2026-01-30 04:53:39.654950 | orchestrator | 2026-01-30 04:53:04 | INFO  | Waiting for import to complete... 
2026-01-30 04:53:39.654967 | orchestrator | 2026-01-30 04:53:14 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-01-30 04:53:39.654986 | orchestrator | 2026-01-30 04:53:15 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-01-30 04:53:39.655003 | orchestrator | 2026-01-30 04:53:15 | INFO  | Setting internal_version = 0.6.2
2026-01-30 04:53:39.655022 | orchestrator | 2026-01-30 04:53:15 | INFO  | Setting image_original_user = cirros
2026-01-30 04:53:39.655041 | orchestrator | 2026-01-30 04:53:15 | INFO  | Adding tag os:cirros
2026-01-30 04:53:39.655056 | orchestrator | 2026-01-30 04:53:15 | INFO  | Setting property architecture: x86_64
2026-01-30 04:53:39.655073 | orchestrator | 2026-01-30 04:53:15 | INFO  | Setting property hw_disk_bus: scsi
2026-01-30 04:53:39.655092 | orchestrator | 2026-01-30 04:53:15 | INFO  | Setting property hw_rng_model: virtio
2026-01-30 04:53:39.655110 | orchestrator | 2026-01-30 04:53:16 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-30 04:53:39.655127 | orchestrator | 2026-01-30 04:53:16 | INFO  | Setting property hw_watchdog_action: reset
2026-01-30 04:53:39.655146 | orchestrator | 2026-01-30 04:53:16 | INFO  | Setting property hypervisor_type: qemu
2026-01-30 04:53:39.655163 | orchestrator | 2026-01-30 04:53:16 | INFO  | Setting property os_distro: cirros
2026-01-30 04:53:39.655179 | orchestrator | 2026-01-30 04:53:17 | INFO  | Setting property os_purpose: minimal
2026-01-30 04:53:39.655197 | orchestrator | 2026-01-30 04:53:17 | INFO  | Setting property replace_frequency: never
2026-01-30 04:53:39.655214 | orchestrator | 2026-01-30 04:53:17 | INFO  | Setting property uuid_validity: none
2026-01-30 04:53:39.655232 | orchestrator | 2026-01-30 04:53:17 | INFO  | Setting property provided_until: none
2026-01-30 04:53:39.655249 | orchestrator | 2026-01-30 04:53:18 | INFO  | Setting property image_description: Cirros
2026-01-30 04:53:39.655269 | orchestrator | 2026-01-30 04:53:18 | INFO  | Setting property image_name: Cirros
2026-01-30 04:53:39.655285 | orchestrator | 2026-01-30 04:53:18 | INFO  | Setting property internal_version: 0.6.2
2026-01-30 04:53:39.655301 | orchestrator | 2026-01-30 04:53:18 | INFO  | Setting property image_original_user: cirros
2026-01-30 04:53:39.655355 | orchestrator | 2026-01-30 04:53:18 | INFO  | Setting property os_version: 0.6.2
2026-01-30 04:53:39.655391 | orchestrator | 2026-01-30 04:53:19 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-30 04:53:39.655415 | orchestrator | 2026-01-30 04:53:19 | INFO  | Setting property image_build_date: 2023-05-30
2026-01-30 04:53:39.655433 | orchestrator | 2026-01-30 04:53:19 | INFO  | Checking status of 'Cirros 0.6.2'
2026-01-30 04:53:39.655451 | orchestrator | 2026-01-30 04:53:19 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-01-30 04:53:39.655469 | orchestrator | 2026-01-30 04:53:19 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-01-30 04:53:39.655488 | orchestrator | 2026-01-30 04:53:19 | INFO  | Processing image 'Cirros 0.6.3'
2026-01-30 04:53:39.655511 | orchestrator | 2026-01-30 04:53:20 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-01-30 04:53:39.655531 | orchestrator | 2026-01-30 04:53:20 | INFO  | Importing image Cirros 0.6.3
2026-01-30 04:53:39.655550 | orchestrator | 2026-01-30 04:53:20 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-30 04:53:39.655570 | orchestrator | 2026-01-30 04:53:21 | INFO  | Waiting for image to leave queued state...
2026-01-30 04:53:39.655589 | orchestrator | 2026-01-30 04:53:24 | INFO  | Waiting for import to complete...
2026-01-30 04:53:39.655634 | orchestrator | 2026-01-30 04:53:34 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-01-30 04:53:39.655686 | orchestrator | 2026-01-30 04:53:34 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-01-30 04:53:39.655702 | orchestrator | 2026-01-30 04:53:34 | INFO  | Setting internal_version = 0.6.3
2026-01-30 04:53:39.655717 | orchestrator | 2026-01-30 04:53:34 | INFO  | Setting image_original_user = cirros
2026-01-30 04:53:39.655731 | orchestrator | 2026-01-30 04:53:34 | INFO  | Adding tag os:cirros
2026-01-30 04:53:39.655746 | orchestrator | 2026-01-30 04:53:34 | INFO  | Setting property architecture: x86_64
2026-01-30 04:53:39.655761 | orchestrator | 2026-01-30 04:53:34 | INFO  | Setting property hw_disk_bus: scsi
2026-01-30 04:53:39.655777 | orchestrator | 2026-01-30 04:53:35 | INFO  | Setting property hw_rng_model: virtio
2026-01-30 04:53:39.655790 | orchestrator | 2026-01-30 04:53:35 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-30 04:53:39.655806 | orchestrator | 2026-01-30 04:53:35 | INFO  | Setting property hw_watchdog_action: reset
2026-01-30 04:53:39.655823 | orchestrator | 2026-01-30 04:53:35 | INFO  | Setting property hypervisor_type: qemu
2026-01-30 04:53:39.655839 | orchestrator | 2026-01-30 04:53:35 | INFO  | Setting property os_distro: cirros
2026-01-30 04:53:39.655856 | orchestrator | 2026-01-30 04:53:36 | INFO  | Setting property os_purpose: minimal
2026-01-30 04:53:39.655872 | orchestrator | 2026-01-30 04:53:36 | INFO  | Setting property replace_frequency: never
2026-01-30 04:53:39.655886 | orchestrator | 2026-01-30 04:53:36 | INFO  | Setting property uuid_validity: none
2026-01-30 04:53:39.655901 | orchestrator | 2026-01-30 04:53:36 | INFO  | Setting property provided_until: none
2026-01-30 04:53:39.655915 | orchestrator | 2026-01-30 04:53:37 | INFO  | Setting property image_description: Cirros
2026-01-30 04:53:39.655928 | orchestrator | 2026-01-30 04:53:37 | INFO  | Setting property image_name: Cirros
2026-01-30 04:53:39.655941 | orchestrator | 2026-01-30 04:53:37 | INFO  | Setting property internal_version: 0.6.3
2026-01-30 04:53:39.655972 | orchestrator | 2026-01-30 04:53:37 | INFO  | Setting property image_original_user: cirros
2026-01-30 04:53:39.655985 | orchestrator | 2026-01-30 04:53:38 | INFO  | Setting property os_version: 0.6.3
2026-01-30 04:53:39.655999 | orchestrator | 2026-01-30 04:53:38 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-30 04:53:39.656015 | orchestrator | 2026-01-30 04:53:38 | INFO  | Setting property image_build_date: 2024-09-26
2026-01-30 04:53:39.656031 | orchestrator | 2026-01-30 04:53:38 | INFO  | Checking status of 'Cirros 0.6.3'
2026-01-30 04:53:39.656048 | orchestrator | 2026-01-30 04:53:38 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-01-30 04:53:39.656063 | orchestrator | 2026-01-30 04:53:38 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-01-30 04:53:39.920586 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-01-30 04:53:42.101897 | orchestrator | 2026-01-30 04:53:42 | INFO  | date: 2026-01-30
2026-01-30 04:53:42.102079 | orchestrator | 2026-01-30 04:53:42 | INFO  | image: octavia-amphora-haproxy-2024.2.20260130.qcow2
2026-01-30 04:53:42.102125 | orchestrator | 2026-01-30 04:53:42 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260130.qcow2
2026-01-30 04:53:42.102141 | orchestrator | 2026-01-30 04:53:42 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260130.qcow2.CHECKSUM
2026-01-30 04:53:42.570923 | orchestrator | 2026-01-30 04:53:42 | INFO  | checksum: 766ad57a2fe9cb95ceeb9d6b119e12f85b3471c34af7cff818d3a725402dc2e7
2026-01-30 04:53:42.649056 | orchestrator | 2026-01-30 04:53:42 | INFO  | It takes a moment until task d4b51f3a-3d6d-4454-96b2-324524f9bcfe (image-manager) has been started and output is visible here.
2026-01-30 04:54:54.959051 | orchestrator | 2026-01-30 04:53:44 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-30'
2026-01-30 04:54:54.959231 | orchestrator | 2026-01-30 04:53:45 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260130.qcow2: 200
2026-01-30 04:54:54.959265 | orchestrator | 2026-01-30 04:53:45 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-30
2026-01-30 04:54:54.959284 | orchestrator | 2026-01-30 04:53:45 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260130.qcow2
2026-01-30 04:54:54.959305 | orchestrator | 2026-01-30 04:53:46 | INFO  | Waiting for image to leave queued state...
2026-01-30 04:54:54.959324 | orchestrator | 2026-01-30 04:53:48 | INFO  | Waiting for import to complete...
2026-01-30 04:54:54.959343 | orchestrator | 2026-01-30 04:53:58 | INFO  | Waiting for import to complete...
2026-01-30 04:54:54.959362 | orchestrator | 2026-01-30 04:54:09 | INFO  | Waiting for import to complete...
2026-01-30 04:54:54.959381 | orchestrator | 2026-01-30 04:54:19 | INFO  | Waiting for import to complete...
2026-01-30 04:54:54.959403 | orchestrator | 2026-01-30 04:54:29 | INFO  | Waiting for import to complete...
2026-01-30 04:54:54.959422 | orchestrator | 2026-01-30 04:54:39 | INFO  | Waiting for import to complete...
2026-01-30 04:54:54.959443 | orchestrator | 2026-01-30 04:54:49 | INFO  | Import of 'OpenStack Octavia Amphora 2026-01-30' successfully completed, reloading images
2026-01-30 04:54:54.959462 | orchestrator | 2026-01-30 04:54:49 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-01-30'
2026-01-30 04:54:54.959516 | orchestrator | 2026-01-30 04:54:49 | INFO  | Setting internal_version = 2026-01-30
2026-01-30 04:54:54.959538 | orchestrator | 2026-01-30 04:54:49 | INFO  | Setting image_original_user = ubuntu
2026-01-30 04:54:54.959558 | orchestrator | 2026-01-30 04:54:49 | INFO  | Adding tag amphora
2026-01-30 04:54:54.959577 | orchestrator | 2026-01-30 04:54:50 | INFO  | Adding tag os:ubuntu
2026-01-30 04:54:54.959597 | orchestrator | 2026-01-30 04:54:50 | INFO  | Setting property architecture: x86_64
2026-01-30 04:54:54.959615 | orchestrator | 2026-01-30 04:54:50 | INFO  | Setting property hw_disk_bus: scsi
2026-01-30 04:54:54.959635 | orchestrator | 2026-01-30 04:54:51 | INFO  | Setting property hw_rng_model: virtio
2026-01-30 04:54:54.959683 | orchestrator | 2026-01-30 04:54:51 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-30 04:54:54.959703 | orchestrator | 2026-01-30 04:54:51 | INFO  | Setting property hw_watchdog_action: reset
2026-01-30 04:54:54.959722 | orchestrator | 2026-01-30 04:54:51 | INFO  | Setting property hypervisor_type: qemu
2026-01-30 04:54:54.959741 | orchestrator | 2026-01-30 04:54:51 | INFO  | Setting property os_distro: ubuntu
2026-01-30 04:54:54.959760 | orchestrator | 2026-01-30 04:54:52 | INFO  | Setting property replace_frequency: quarterly
2026-01-30 04:54:54.959780 | orchestrator | 2026-01-30 04:54:52 | INFO  | Setting property uuid_validity: last-1
2026-01-30 04:54:54.959799 | orchestrator | 2026-01-30 04:54:52 | INFO  | Setting property provided_until: none
2026-01-30 04:54:54.959818 | orchestrator | 2026-01-30 04:54:52 | INFO  | Setting property os_purpose: network
2026-01-30 04:54:54.959856 | orchestrator | 2026-01-30 04:54:53 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-01-30 04:54:54.959876 | orchestrator | 2026-01-30 04:54:53 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-01-30 04:54:54.959895 | orchestrator | 2026-01-30 04:54:53 | INFO  | Setting property internal_version: 2026-01-30
2026-01-30 04:54:54.959913 | orchestrator | 2026-01-30 04:54:53 | INFO  | Setting property image_original_user: ubuntu
2026-01-30 04:54:54.959931 | orchestrator | 2026-01-30 04:54:53 | INFO  | Setting property os_version: 2026-01-30
2026-01-30 04:54:54.959951 | orchestrator | 2026-01-30 04:54:54 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260130.qcow2
2026-01-30 04:54:54.959969 | orchestrator | 2026-01-30 04:54:54 | INFO  | Setting property image_build_date: 2026-01-30
2026-01-30 04:54:54.959988 | orchestrator | 2026-01-30 04:54:54 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-01-30'
2026-01-30 04:54:54.960006 | orchestrator | 2026-01-30 04:54:54 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-01-30'
2026-01-30 04:54:54.960051 | orchestrator | 2026-01-30 04:54:54 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-01-30 04:54:54.960072 | orchestrator | 2026-01-30 04:54:54 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-01-30 04:54:54.960092 | orchestrator | 2026-01-30 04:54:54 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-01-30 04:54:54.960111 | orchestrator | 2026-01-30 04:54:54 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-01-30 04:54:55.409720 | orchestrator | ok: Runtime: 0:03:03.694172
2026-01-30 04:54:55.427848 |
2026-01-30 04:54:55.427977 | TASK [Run checks]
2026-01-30 04:54:56.187347 | orchestrator | + set -e
2026-01-30 04:54:56.187543 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 04:54:56.187568 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 04:54:56.187601 | orchestrator | ++ INTERACTIVE=false
2026-01-30 04:54:56.187637 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 04:54:56.187703 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 04:54:56.187726 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-30 04:54:56.188573 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-30 04:54:56.195429 | orchestrator |
2026-01-30 04:54:56.195535 | orchestrator | # CHECK
2026-01-30 04:54:56.195559 | orchestrator |
2026-01-30 04:54:56.195581 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-30 04:54:56.195614 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-30 04:54:56.195633 | orchestrator | + echo
2026-01-30 04:54:56.195682 | orchestrator | + echo '# CHECK'
2026-01-30 04:54:56.195704 | orchestrator | + echo
2026-01-30 04:54:56.195730 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-30 04:54:56.196217 | orchestrator | ++ semver 9.5.0 5.0.0
2026-01-30 04:54:56.262804 | orchestrator |
2026-01-30 04:54:56.262924 | orchestrator | ## Containers @ testbed-manager
2026-01-30 04:54:56.262949 | orchestrator |
2026-01-30 04:54:56.262970 | orchestrator | + [[ 1 -eq -1 ]]
2026-01-30 04:54:56.262992 | orchestrator | + echo
2026-01-30 04:54:56.263012 | orchestrator | + echo '## Containers @ testbed-manager'
2026-01-30 04:54:56.263030 | orchestrator | + echo
2026-01-30 04:54:56.263051 | orchestrator | + osism container testbed-manager ps
2026-01-30 04:54:58.229995 | orchestrator | 2026-01-30 04:54:58 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-01-30 04:54:58.591354 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-30 04:54:58.591478 | orchestrator | 3182f103a46b registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_blackbox_exporter
2026-01-30 04:54:58.591503 | orchestrator | 767a9731630c registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_alertmanager
2026-01-30 04:54:58.591515 | orchestrator | 6dde5bb5dbc7 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-01-30 04:54:58.591525 | orchestrator | e126f96f87f1 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter
2026-01-30 04:54:58.591536 | orchestrator | 65a277ddd864 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_server
2026-01-30 04:54:58.591551 | orchestrator | 1104e77a042f registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 57 minutes ago Up 57 minutes cephclient
2026-01-30 04:54:58.591562 | orchestrator | 4aa292559ce1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-01-30 04:54:58.591572 | orchestrator | 65ba0633efd2 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-01-30 04:54:58.591608 | orchestrator | 3145878e7f19 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-01-30 04:54:58.591620 | orchestrator | 7110672626b0 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-01-30 04:54:58.591630 | orchestrator | 65781c2d73c9 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-01-30 04:54:58.591640 | orchestrator | ac74878bb484 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-01-30 04:54:58.591680 | orchestrator | a93b9ed1873c registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-01-30 04:54:58.591691 | orchestrator | c90bda3f92c8 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-01-30 04:54:58.591721 | orchestrator | 5902a8c4e996 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-01-30 04:54:58.591741 | orchestrator | 1be394c33f14 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-01-30 04:54:58.591752 | orchestrator | 008b72db1911 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-01-30 04:54:58.591762 | orchestrator | 9d7725f5e734 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-01-30 04:54:58.591772 | orchestrator | c4c2efa0efa4 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-01-30 04:54:58.591782 | orchestrator | 2b5a00ada676 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-01-30 04:54:58.591792 | orchestrator | 71fddc656320 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-01-30 04:54:58.591802 | orchestrator | b541ab01a077 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-01-30 04:54:58.591813 | orchestrator | f3c3d898eb91 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-01-30 04:54:58.591829 | orchestrator | bb0230b377f9 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-01-30 04:54:58.591839 | orchestrator | be25dc07e74e registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-01-30 04:54:58.591849 | orchestrator | 7d727fb7a103 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-01-30 04:54:58.591859 | orchestrator | bd6f80ae5b54 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-01-30 04:54:58.591869 | orchestrator | 4647d05bc7b4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-01-30 04:54:58.591879 | orchestrator | 93691a8e59ec registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-01-30 04:54:58.591893 | orchestrator | e2589980ee5f registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-01-30 04:54:58.861395 | orchestrator |
2026-01-30 04:54:58.861508 | orchestrator | ## Images @ testbed-manager
2026-01-30 04:54:58.861527 | orchestrator |
2026-01-30 04:54:58.861539 | orchestrator | + echo
2026-01-30 04:54:58.861553 | orchestrator | + echo '## Images @ testbed-manager'
2026-01-30 04:54:58.861565 | orchestrator | + echo
2026-01-30 04:54:58.861582 | orchestrator | + osism container testbed-manager images
2026-01-30 04:55:01.210232 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-01-30 04:55:01.210331 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 03fe6f79819d 25 hours ago 238MB
2026-01-30 04:55:01.210344 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 days ago 41.4MB
2026-01-30 04:55:01.210350 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 8 weeks ago 11.5MB
2026-01-30 04:55:01.210355 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB
2026-01-30 04:55:01.210360 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-01-30 04:55:01.210365 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-01-30 04:55:01.210369 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-01-30 04:55:01.210376 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB
2026-01-30 04:55:01.210380 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-01-30 04:55:01.210401 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB
2026-01-30 04:55:01.210406 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB
2026-01-30 04:55:01.210411 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-01-30 04:55:01.210415 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB
2026-01-30 04:55:01.210420 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB
2026-01-30 04:55:01.210424 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB
2026-01-30 04:55:01.210429 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB
2026-01-30 04:55:01.210433 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB
2026-01-30 04:55:01.210437 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB
2026-01-30 04:55:01.210442 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 2 months ago 334MB
2026-01-30 04:55:01.210447 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB
2026-01-30 04:55:01.210451 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB
2026-01-30 04:55:01.210455 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB
2026-01-30 04:55:01.210460 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 8 months ago 453MB
2026-01-30 04:55:01.210464 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 19 months ago 146MB
2026-01-30 04:55:01.210469 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-01-30 04:55:01.590142 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-01-30 04:55:01.590554 | orchestrator | ++ semver 9.5.0 5.0.0
2026-01-30 04:55:01.647573 | orchestrator |
2026-01-30 04:55:01.647722 | orchestrator | ## Containers @ testbed-node-0
2026-01-30 04:55:01.647738 | orchestrator |
2026-01-30 04:55:01.647749 | orchestrator | + [[ 1 -eq -1 ]]
2026-01-30 04:55:01.647758 | orchestrator | + echo
2026-01-30 04:55:01.647768 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-01-30 04:55:01.647778 | orchestrator | + echo
2026-01-30 04:55:01.647787 | orchestrator | + osism container testbed-node-0 ps
2026-01-30 04:55:04.031996 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-30 04:55:04.032103 | orchestrator | b5e06fd9b70d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-01-30 04:55:04.032142 | orchestrator | a76db59f91fe registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-01-30 04:55:04.032154 | orchestrator | 9d329cbfb11f registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-01-30 04:55:04.032165 | orchestrator | e6d799baaaa8 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes prometheus_elasticsearch_exporter
2026-01-30 04:55:04.032197 | orchestrator | 1d440b23a00f registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-01-30 04:55:04.032207 | orchestrator | bc4f8914d602 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-01-30 04:55:04.032223 | orchestrator | 18da57642bbc registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter
2026-01-30 04:55:04.032233 | orchestrator | 02e375080e65 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter
2026-01-30 04:55:04.032243 | orchestrator | fd738c345376 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share
2026-01-30 04:55:04.032254 | orchestrator | 95e0451c60a0 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-01-30 04:55:04.032264 | orchestrator | 4aa4f8f2bbe1 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-01-30 04:55:04.032274 | orchestrator | b12c48fe1d8f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-01-30 04:55:04.032284 | orchestrator | 709d5ebf8d57 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-01-30 04:55:04.032293 | orchestrator | 50052940e4c3 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-01-30 04:55:04.032303 | orchestrator | d104f0e08c21 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-01-30 04:55:04.032313 | orchestrator | 52146555f83d registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api
2026-01-30 04:55:04.032323 | orchestrator | e091b6964eda registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-01-30 04:55:04.032332 | orchestrator | 48f0c56214eb registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-01-30 04:55:04.032342 | orchestrator | d2f9ce05c282 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker
2026-01-30 04:55:04.032376 | orchestrator | cbc375aca1a3 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping
2026-01-30 04:55:04.032387 | orchestrator | 4b03e0cbf3d6 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager
2026-01-30 04:55:04.032397 | orchestrator | a77a3337adf3 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-01-30 04:55:04.032414 | orchestrator | 2e8e7417d5ff registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-01-30 04:55:04.032424 | orchestrator | 1c3e6f0958ee registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-01-30 04:55:04.032434 | orchestrator | fa8169522c37 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-01-30 04:55:04.032449 | orchestrator | 1c0370b3e32a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 25 minutes (healthy) designate_producer
2026-01-30 04:55:04.032459 | orchestrator | 8305eeae1630 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-01-30 04:55:04.032469 | orchestrator | d7f98489127f registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-01-30 04:55:04.032479 | orchestrator | 182e3692875f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-01-30 04:55:04.032489 | orchestrator | 8b3bfab4f06f registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-01-30 04:55:04.032499 | orchestrator | ae2360bd0ec8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-01-30 04:55:04.032509 | orchestrator | de570d1519a5 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-01-30 04:55:04.032519 | orchestrator | 91dbe7b66f16 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-01-30 04:55:04.032529 | orchestrator | d30b967ef939 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-01-30 04:55:04.032539 | orchestrator | 91278b83a693 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-01-30 04:55:04.032549 | orchestrator | 4d1904dc9073 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-01-30 04:55:04.032559 | orchestrator | 0bafbf482107 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-01-30 04:55:04.032569 | orchestrator | 976ec1e6cd47 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console
2026-01-30 04:55:04.032578 | orchestrator | 5217ca76654c registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 35 minutes (healthy) skyline_apiserver
2026-01-30 04:55:04.032595 | orchestrator | 10ab28411722 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-01-30 04:55:04.032619 | orchestrator | 4fcfbdee1c10 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-01-30 04:55:04.032630 | orchestrator | 57b48dcb5b79 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-01-30 04:55:04.032645 | orchestrator | 5249edc3ad62 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api
2026-01-30 04:55:04.032684 | orchestrator | b2720949630a registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-01-30 04:55:04.032695 | orchestrator | 8f93b63cbc23 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-01-30 04:55:04.032705 | orchestrator | 18ecb8b0f0ef registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-01-30 04:55:04.032714 | orchestrator | 3b33779ea6df registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone
2026-01-30 04:55:04.032725 | orchestrator | 047e86915482 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet
2026-01-30 04:55:04.032734 | orchestrator | f81b5fdfbaf8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-01-30 04:55:04.032744 | orchestrator | 384afbd713e6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0
2026-01-30 04:55:04.032754 | orchestrator | d97a5fdc2f01 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-01-30 04:55:04.032764 | orchestrator | 9b4b4ef35663 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-01-30 04:55:04.032774 | orchestrator | be6e53a3980f registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-01-30 04:55:04.032784 | orchestrator | 2b3f28764e7a registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-01-30 04:55:04.032794 | orchestrator | 2943d3f96e54 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-01-30 04:55:04.032804 | orchestrator | b04021436a5c registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-01-30 04:55:04.032819 | orchestrator | 9ad72a3e4167 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-01-30 04:55:04.032829 | orchestrator | ad0751d55c7b registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-01-30 04:55:04.032846 | orchestrator | 9363eb8d6f21 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-01-30 04:55:04.032863 | orchestrator | c1356c973bfa registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-01-30 04:55:04.032873 | orchestrator | cd8dfe6785fe registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-01-30 04:55:04.032883 | orchestrator | 7b8be0e39368 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-01-30 04:55:04.032893 | orchestrator | 41a83d08a346 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-01-30 04:55:04.032903 | orchestrator | 576dc47de06e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-01-30 04:55:04.032913 | orchestrator | 55028579a35a registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-01-30 04:55:04.032922 | orchestrator | eceff94f9ec2 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-01-30 04:55:04.032932 | orchestrator | 6f68512f2f55 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-01-30 04:55:04.032942 | orchestrator | 7615ea3b9a6c registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-01-30 04:55:04.032952 | orchestrator | 0f38ed6ba652 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-01-30 04:55:04.032962 | orchestrator | f3d81874f64b registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130
"dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-01-30 04:55:04.032972 | orchestrator | d152dafbc110 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-01-30 04:55:04.405917 | orchestrator | 2026-01-30 04:55:04.406075 | orchestrator | ## Images @ testbed-node-0 2026-01-30 04:55:04.406104 | orchestrator | 2026-01-30 04:55:04.406125 | orchestrator | + echo 2026-01-30 04:55:04.406146 | orchestrator | + echo '## Images @ testbed-node-0' 2026-01-30 04:55:04.406176 | orchestrator | + echo 2026-01-30 04:55:04.406196 | orchestrator | + osism container testbed-node-0 images 2026-01-30 04:55:06.671143 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-30 04:55:06.671271 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-01-30 04:55:06.671290 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-01-30 04:55:06.671303 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-01-30 04:55:06.671314 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-01-30 04:55:06.671345 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-01-30 04:55:06.671357 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-01-30 04:55:06.671373 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-01-30 04:55:06.671389 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-01-30 04:55:06.671400 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-01-30 04:55:06.671411 | 
orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-01-30 04:55:06.671422 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-01-30 04:55:06.671433 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-01-30 04:55:06.671444 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-01-30 04:55:06.671455 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-01-30 04:55:06.671466 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-01-30 04:55:06.671477 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-01-30 04:55:06.671488 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-01-30 04:55:06.671499 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-01-30 04:55:06.671541 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-01-30 04:55:06.671552 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-01-30 04:55:06.671563 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-01-30 04:55:06.671574 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-01-30 04:55:06.671585 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-01-30 
04:55:06.671596 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-01-30 04:55:06.671607 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-01-30 04:55:06.671618 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-01-30 04:55:06.671629 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-01-30 04:55:06.671647 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-01-30 04:55:06.671704 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-01-30 04:55:06.671719 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-01-30 04:55:06.671742 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-01-30 04:55:06.671777 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-01-30 04:55:06.671791 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-01-30 04:55:06.671804 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-01-30 04:55:06.671817 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-01-30 04:55:06.671829 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-01-30 04:55:06.671841 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-01-30 04:55:06.671854 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-01-30 04:55:06.671867 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-01-30 04:55:06.671878 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-01-30 04:55:06.671903 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-01-30 04:55:06.671915 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-01-30 04:55:06.671927 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-01-30 04:55:06.671940 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-01-30 04:55:06.671953 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-01-30 04:55:06.671973 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-01-30 04:55:06.671987 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-01-30 04:55:06.672000 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-01-30 04:55:06.672014 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-01-30 04:55:06.672025 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-01-30 04:55:06.672037 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-01-30 04:55:06.672047 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-01-30 04:55:06.672058 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-01-30 04:55:06.672069 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-01-30 04:55:06.672080 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-01-30 04:55:06.672091 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-01-30 04:55:06.672110 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-01-30 04:55:06.672121 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-01-30 04:55:06.672138 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-01-30 04:55:06.672149 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-01-30 04:55:06.672160 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-01-30 04:55:06.672171 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-01-30 04:55:06.672182 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-01-30 04:55:06.672201 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-01-30 04:55:06.672212 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-01-30 04:55:06.672226 | 
orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-01-30 04:55:06.672243 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-01-30 04:55:06.672254 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-01-30 04:55:06.672265 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-30 04:55:06.929487 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-30 04:55:06.930308 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-30 04:55:06.988125 | orchestrator | 2026-01-30 04:55:06.988249 | orchestrator | ## Containers @ testbed-node-1 2026-01-30 04:55:06.988279 | orchestrator | 2026-01-30 04:55:06.988299 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-30 04:55:06.988317 | orchestrator | + echo 2026-01-30 04:55:06.988335 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-01-30 04:55:06.988355 | orchestrator | + echo 2026-01-30 04:55:06.988373 | orchestrator | + osism container testbed-node-1 ps 2026-01-30 04:55:09.451069 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-01-30 04:55:09.451174 | orchestrator | 5ab1ec2119f2 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-01-30 04:55:09.451186 | orchestrator | 8c85db397f95 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-01-30 04:55:09.451193 | orchestrator | 9bdb3b0f8d4b registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-01-30 04:55:09.451200 | orchestrator | 629c7e58823d registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-01-30 04:55:09.451208 | orchestrator | 1b7aa579a1ff registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-01-30 04:55:09.451214 | orchestrator | ee2ea9493c9f registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-01-30 04:55:09.451239 | orchestrator | ea4b9dc7257d registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter 2026-01-30 04:55:09.451246 | orchestrator | 418fcd1168b0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_node_exporter 2026-01-30 04:55:09.451254 | orchestrator | 29fcbe103961 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share 2026-01-30 04:55:09.451264 | orchestrator | 11eb14711a3c registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler 2026-01-30 04:55:09.451274 | orchestrator | e1fcf6d02b1f registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-01-30 04:55:09.451284 | orchestrator | a095aadf4fcc registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-01-30 04:55:09.451311 | orchestrator | 8298141a6b78 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-01-30 04:55:09.451321 | orchestrator | c1a5371bbe13 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-01-30 04:55:09.451330 | orchestrator | ac7dd984f185 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-01-30 04:55:09.451338 | orchestrator | 90db5f427450 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api 2026-01-30 04:55:09.451347 | orchestrator | 8529c9b0ff37 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-01-30 04:55:09.451358 | orchestrator | 9abfe4eecf4e registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-01-30 04:55:09.451368 | orchestrator | bcc0df1e8f3a registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker 2026-01-30 04:55:09.451399 | orchestrator | 8c5bf27f36bf registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping 2026-01-30 04:55:09.451410 | orchestrator | de0a6a8c3a7b registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager 2026-01-30 04:55:09.451420 | orchestrator | 3ad6c63ee467 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-01-30 04:55:09.451431 | orchestrator | 97481b493620 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-01-30 04:55:09.451441 | 
orchestrator | df2346ff8893 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-01-30 04:55:09.451458 | orchestrator | 8d013d070c34 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-01-30 04:55:09.451469 | orchestrator | 1acedc20a5ad registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-01-30 04:55:09.451478 | orchestrator | 601a2059578b registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-01-30 04:55:09.451489 | orchestrator | fbc0d5bb1b6a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-01-30 04:55:09.451495 | orchestrator | 3a903402612f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-01-30 04:55:09.451501 | orchestrator | 29499fd9d56d registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-01-30 04:55:09.451507 | orchestrator | 76dd9869b27e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-01-30 04:55:09.451513 | orchestrator | b0dbe86bca8f registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-01-30 04:55:09.451518 | orchestrator | 8e113958ef13 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes 
(healthy) cinder_backup 2026-01-30 04:55:09.451524 | orchestrator | 222a8bf71ca5 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-01-30 04:55:09.451530 | orchestrator | 71dcf82ee590 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-01-30 04:55:09.451536 | orchestrator | 412f36de24fa registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-01-30 04:55:09.451547 | orchestrator | 03b994717532 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-01-30 04:55:09.451553 | orchestrator | 246f226dfd64 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-01-30 04:55:09.451559 | orchestrator | fa5193894f0e registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-01-30 04:55:09.451570 | orchestrator | 3f6c69d019fc registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-01-30 04:55:09.451585 | orchestrator | e90aa35e71c9 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-01-30 04:55:09.451595 | orchestrator | 3b18dc660a59 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-01-30 04:55:09.451602 | orchestrator | cf53f2fb6eff registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api 2026-01-30 
04:55:09.451609 | orchestrator | e1642a7f8143 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 42 minutes (healthy) nova_scheduler 2026-01-30 04:55:09.451616 | orchestrator | 767d82ceba24 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-01-30 04:55:09.451623 | orchestrator | e4e94b56e620 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-01-30 04:55:09.451630 | orchestrator | cfb2086d7c7b registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-01-30 04:55:09.451636 | orchestrator | dbd67f0f3fd4 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-01-30 04:55:09.451647 | orchestrator | ec71bc66bf4b registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 53 minutes (healthy) keystone_ssh 2026-01-30 04:55:09.451683 | orchestrator | 591dfddb0e29 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1 2026-01-30 04:55:09.451694 | orchestrator | 7aa8d151515d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-01-30 04:55:09.451703 | orchestrator | b97e426bfe4f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-01-30 04:55:09.451712 | orchestrator | 0ed00662f854 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-01-30 04:55:09.451721 | orchestrator | 1bb154ab42fe 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-01-30 04:55:09.451730 | orchestrator | 2adb4953e237 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-01-30 04:55:09.451740 | orchestrator | 75bad154b66d registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-01-30 04:55:09.451749 | orchestrator | cb8af988d286 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-01-30 04:55:09.451758 | orchestrator | 706bb8661d7a registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-01-30 04:55:09.451767 | orchestrator | 96d7e0e88baf registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-01-30 04:55:09.451785 | orchestrator | 3aff88926e31 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-01-30 04:55:09.451794 | orchestrator | 8c3f5a911933 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-01-30 04:55:09.451805 | orchestrator | 81c236dd647b registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-01-30 04:55:09.451812 | orchestrator | d0084311708b registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-01-30 04:55:09.451818 | orchestrator | 29ec01315d47 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-01-30 04:55:09.451830 | orchestrator | 6cade27fd753 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-01-30 04:55:09.451836 | orchestrator | 52099f4398a5 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-01-30 04:55:09.451845 | orchestrator | 85c92dce077c registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-01-30 04:55:09.451854 | orchestrator | bd03a81b62cd registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-01-30 04:55:09.451870 | orchestrator | 28c68fef6ff9 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-01-30 04:55:09.451882 | orchestrator | 63e673196a61 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-01-30 04:55:09.451890 | orchestrator | a7ad59845856 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-01-30 04:55:09.727564 | orchestrator | 2026-01-30 04:55:09.727764 | orchestrator | ## Images @ testbed-node-1 2026-01-30 04:55:09.727795 | orchestrator | 2026-01-30 04:55:09.727816 | orchestrator | + echo 2026-01-30 04:55:09.727828 | orchestrator | + echo '## Images @ testbed-node-1' 2026-01-30 04:55:09.727841 | orchestrator | + echo 2026-01-30 04:55:09.727852 | orchestrator | + osism container testbed-node-1 images 2026-01-30 04:55:12.068160 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-30 04:55:12.068264 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-01-30 04:55:12.068278 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-01-30 04:55:12.068289 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-01-30 04:55:12.068300 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-01-30 04:55:12.068310 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-01-30 04:55:12.068320 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-01-30 04:55:12.068357 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-01-30 04:55:12.068376 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-01-30 04:55:12.068393 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-01-30 04:55:12.068409 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-01-30 04:55:12.068425 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-01-30 04:55:12.068441 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-01-30 04:55:12.068455 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-01-30 04:55:12.068470 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-01-30 04:55:12.068485 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 
1.15GB 2026-01-30 04:55:12.068501 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-01-30 04:55:12.068516 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-01-30 04:55:12.068532 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-01-30 04:55:12.068548 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-01-30 04:55:12.068563 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-01-30 04:55:12.068579 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-01-30 04:55:12.068595 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-01-30 04:55:12.068610 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-01-30 04:55:12.068626 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-01-30 04:55:12.068641 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-01-30 04:55:12.068681 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-01-30 04:55:12.068701 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-01-30 04:55:12.068719 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-01-30 04:55:12.068738 | orchestrator | 
registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-01-30 04:55:12.068755 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-01-30 04:55:12.068774 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-01-30 04:55:12.068817 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-01-30 04:55:12.068873 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-01-30 04:55:12.068885 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-01-30 04:55:12.068897 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-01-30 04:55:12.068909 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-01-30 04:55:12.068920 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-01-30 04:55:12.068949 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-01-30 04:55:12.068961 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-01-30 04:55:12.068973 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-01-30 04:55:12.068984 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-01-30 04:55:12.068995 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-01-30 04:55:12.069007 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-01-30 04:55:12.069019 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-01-30 04:55:12.069030 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-01-30 04:55:12.069042 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-01-30 04:55:12.069052 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-01-30 04:55:12.069062 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-01-30 04:55:12.069072 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-01-30 04:55:12.069082 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-01-30 04:55:12.069092 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-01-30 04:55:12.069101 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-01-30 04:55:12.069111 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-01-30 04:55:12.069121 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-01-30 04:55:12.069130 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-01-30 04:55:12.069141 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-01-30 04:55:12.069150 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-01-30 04:55:12.069160 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-01-30 04:55:12.069169 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-01-30 04:55:12.069186 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-01-30 04:55:12.069196 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-01-30 04:55:12.069206 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-01-30 04:55:12.069215 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-01-30 04:55:12.069234 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-01-30 04:55:12.069245 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-01-30 04:55:12.069254 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-01-30 04:55:12.069264 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-01-30 04:55:12.069273 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-01-30 04:55:12.069283 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB 2026-01-30 04:55:12.338431 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-01-30 04:55:12.338726 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-30 04:55:12.391934 | 
orchestrator | + [[ 1 -eq -1 ]]
2026-01-30 04:55:12.392130 | orchestrator |
2026-01-30 04:55:12.392147 | orchestrator | ## Containers @ testbed-node-2
2026-01-30 04:55:12.392154 | orchestrator |
2026-01-30 04:55:12.392160 | orchestrator | + echo
2026-01-30 04:55:12.392166 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-01-30 04:55:12.392173 | orchestrator | + echo
2026-01-30 04:55:12.392178 | orchestrator | + osism container testbed-node-2 ps
2026-01-30 04:55:14.819987 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-01-30 04:55:14.820082 | orchestrator | f947f28e6105 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-01-30 04:55:14.820092 | orchestrator | a01b17fd35fc registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-01-30 04:55:14.820098 | orchestrator | 4bc846c20834 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-01-30 04:55:14.820103 | orchestrator | 91b6ee8fefb2 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-01-30 04:55:14.820110 | orchestrator | 998aefd2382c registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-01-30 04:55:14.820115 | orchestrator | fa2eae68ad62 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-01-30 04:55:14.820120 | orchestrator | 835de797cb09 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_mysqld_exporter
2026-01-30 04:55:14.820126 |
orchestrator | 903e879bf399 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-01-30 04:55:14.820149 | orchestrator | 580f94bb06cd registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share 2026-01-30 04:55:14.820154 | orchestrator | d9dce009a323 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-01-30 04:55:14.820159 | orchestrator | 49218533bb97 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-01-30 04:55:14.820163 | orchestrator | ae66eafe419e registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-01-30 04:55:14.820183 | orchestrator | e6defca47399 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-01-30 04:55:14.820188 | orchestrator | 7bb4bd9c5810 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-01-30 04:55:14.820193 | orchestrator | 348e87fca711 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-01-30 04:55:14.820198 | orchestrator | 9c893f1e39bb registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api 2026-01-30 04:55:14.820202 | orchestrator | d476fa885faf registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-01-30 04:55:14.820207 | orchestrator | 4bb84df8eecd 
registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-01-30 04:55:14.820214 | orchestrator | 2155da327d02 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker 2026-01-30 04:55:14.820237 | orchestrator | 2d91b45b9310 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping 2026-01-30 04:55:14.820245 | orchestrator | d4480892f7be registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-01-30 04:55:14.820252 | orchestrator | 262307902de2 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-01-30 04:55:14.820259 | orchestrator | 284292c127d9 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-01-30 04:55:14.820265 | orchestrator | 18022389af3e registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-01-30 04:55:14.820272 | orchestrator | d9b638fa2ba2 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 25 minutes (healthy) designate_mdns 2026-01-30 04:55:14.820284 | orchestrator | 1f40f656aa91 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-01-30 04:55:14.820291 | orchestrator | 40ae502f7883 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 
2026-01-30 04:55:14.820298 | orchestrator | e5dcd3d402e5 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-01-30 04:55:14.820306 | orchestrator | 371d26f15401 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-01-30 04:55:14.820313 | orchestrator | d604917f59b6 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-01-30 04:55:14.820320 | orchestrator | 007939ca1fc7 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-01-30 04:55:14.820327 | orchestrator | e147e88c5f5d registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api 2026-01-30 04:55:14.820334 | orchestrator | 6d42eebc9072 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-01-30 04:55:14.820340 | orchestrator | 122b1159cb92 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-01-30 04:55:14.820348 | orchestrator | c507a25f9d28 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-01-30 04:55:14.820355 | orchestrator | 8c6f8146f26f registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-01-30 04:55:14.820362 | orchestrator | ca6da1457ce3 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes 
(healthy) glance_api 2026-01-30 04:55:14.820368 | orchestrator | 458d3685ca38 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_console 2026-01-30 04:55:14.820375 | orchestrator | 9bbd4d8c604b registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-01-30 04:55:14.820390 | orchestrator | 4818e63af1b0 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-01-30 04:55:14.820397 | orchestrator | 8c5c549a2700 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-01-30 04:55:14.820405 | orchestrator | 5ea9e9ba40ae registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-01-30 04:55:14.820784 | orchestrator | 895c017afd27 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_api 2026-01-30 04:55:14.820826 | orchestrator | 7e07b06f64ee registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-01-30 04:55:14.820835 | orchestrator | fde0ca1313f1 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-01-30 04:55:14.820842 | orchestrator | f64b32257c5c registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-01-30 04:55:14.820850 | orchestrator | bf78e1f43062 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-01-30 
04:55:14.820857 | orchestrator | b7834698c6e3 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-01-30 04:55:14.820865 | orchestrator | 03318182c175 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-01-30 04:55:14.820873 | orchestrator | b95096a29d27 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2 2026-01-30 04:55:14.820881 | orchestrator | b6e036a91674 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-01-30 04:55:14.820897 | orchestrator | 1f4acb9ff46e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-01-30 04:55:14.820905 | orchestrator | 94a2ba9446db registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-01-30 04:55:14.820915 | orchestrator | e07f4b8b2554 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-01-30 04:55:14.820923 | orchestrator | 937990d04602 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-01-30 04:55:14.820931 | orchestrator | 6ddb4d20f44b registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-01-30 04:55:14.820938 | orchestrator | 211369ea5d04 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-01-30 04:55:14.820945 | orchestrator | 6e8cba25779b 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-01-30 04:55:14.820951 | orchestrator | 2bac1aa8adf7 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-01-30 04:55:14.820959 | orchestrator | fd4ecc2c3801 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-01-30 04:55:14.820967 | orchestrator | eae3e2fa8783 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-01-30 04:55:14.820980 | orchestrator | 3c230f4110e0 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-01-30 04:55:14.820996 | orchestrator | b17cd54a02ee registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-01-30 04:55:14.821003 | orchestrator | 7dfabc8aba8e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-01-30 04:55:14.821010 | orchestrator | 33ea5b2cadce registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-01-30 04:55:14.821017 | orchestrator | 78e7d3af8d01 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-01-30 04:55:14.821024 | orchestrator | aa0a93ff5335 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-01-30 04:55:14.821031 | orchestrator | 2ccec9939450 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-01-30 04:55:14.821038 | orchestrator | a5a8b283d861 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-01-30 04:55:14.821045 | orchestrator | cf3c8e172e0d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-01-30 04:55:14.821052 | orchestrator | 1078d595e36e registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-01-30 04:55:15.132102 | orchestrator | 2026-01-30 04:55:15.132165 | orchestrator | ## Images @ testbed-node-2 2026-01-30 04:55:15.132172 | orchestrator | 2026-01-30 04:55:15.132176 | orchestrator | + echo 2026-01-30 04:55:15.132181 | orchestrator | + echo '## Images @ testbed-node-2' 2026-01-30 04:55:15.132186 | orchestrator | + echo 2026-01-30 04:55:15.132191 | orchestrator | + osism container testbed-node-2 images 2026-01-30 04:55:17.596407 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-01-30 04:55:17.596497 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-01-30 04:55:17.596503 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-01-30 04:55:17.596508 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-01-30 04:55:17.596525 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-01-30 04:55:17.596529 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-01-30 04:55:17.596533 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-01-30 
04:55:17.596537 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-01-30 04:55:17.596541 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-01-30 04:55:17.596566 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-01-30 04:55:17.596601 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-01-30 04:55:17.596613 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-01-30 04:55:17.596619 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-01-30 04:55:17.596624 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-01-30 04:55:17.596628 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-01-30 04:55:17.596632 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-01-30 04:55:17.596635 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-01-30 04:55:17.596639 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-01-30 04:55:17.596643 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-01-30 04:55:17.596647 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-01-30 04:55:17.596669 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-01-30 04:55:17.596676 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-01-30 04:55:17.596682 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-01-30 04:55:17.596688 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-01-30 04:55:17.596693 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-01-30 04:55:17.596699 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-01-30 04:55:17.596705 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-01-30 04:55:17.596711 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-01-30 04:55:17.596716 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-01-30 04:55:17.596723 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-01-30 04:55:17.596728 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-01-30 04:55:17.596735 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-01-30 04:55:17.596756 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-01-30 04:55:17.596762 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-01-30 04:55:17.596766 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-01-30 04:55:17.596770 | 
orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-01-30 04:55:17.596780 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-01-30 04:55:17.596784 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-01-30 04:55:17.596788 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-01-30 04:55:17.596798 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-01-30 04:55:17.596802 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-01-30 04:55:17.596806 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-01-30 04:55:17.596810 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-01-30 04:55:17.596814 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-01-30 04:55:17.596817 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-01-30 04:55:17.596821 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-01-30 04:55:17.596825 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-01-30 04:55:17.596829 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-01-30 04:55:17.596832 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-01-30 04:55:17.596836 | 
orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-01-30 04:55:17.596840 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-01-30 04:55:17.596844 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-01-30 04:55:17.596848 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-01-30 04:55:17.596851 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-01-30 04:55:17.596855 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-01-30 04:55:17.596859 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-01-30 04:55:17.596863 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-01-30 04:55:17.596866 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-01-30 04:55:17.596870 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-01-30 04:55:17.596874 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-01-30 04:55:17.596878 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-01-30 04:55:17.596881 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-01-30 04:55:17.596888 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-01-30 04:55:17.596892 
| orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-01-30 04:55:17.596900 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-01-30 04:55:17.596904 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-01-30 04:55:17.596908 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-01-30 04:55:17.596912 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-01-30 04:55:17.596918 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-01-30 04:55:17.596922 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 8 months ago 1.27GB
2026-01-30 04:55:17.929877 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-01-30 04:55:17.936503 | orchestrator | + set -e
2026-01-30 04:55:17.936593 | orchestrator | + source /opt/manager-vars.sh
2026-01-30 04:55:17.936608 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-30 04:55:17.936619 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-30 04:55:17.936630 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-30 04:55:17.936640 | orchestrator | ++ CEPH_VERSION=reef
2026-01-30 04:55:17.936650 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-30 04:55:17.936716 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-30 04:55:17.936727 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-30 04:55:17.936737 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-30 04:55:17.936747 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-30 04:55:17.936757 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-30 04:55:17.936767 | orchestrator | ++ export ARA=false
2026-01-30 04:55:17.936778 | orchestrator | ++ ARA=false
2026-01-30 04:55:17.936787 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-30 04:55:17.936797 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-30 04:55:17.936807 | orchestrator | ++ export TEMPEST=false
2026-01-30 04:55:17.936817 | orchestrator | ++ TEMPEST=false
2026-01-30 04:55:17.936826 | orchestrator | ++ export IS_ZUUL=true
2026-01-30 04:55:17.936836 | orchestrator | ++ IS_ZUUL=true
2026-01-30 04:55:17.936846 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 04:55:17.936856 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 04:55:17.936866 | orchestrator | ++ export EXTERNAL_API=false
2026-01-30 04:55:17.936875 | orchestrator | ++ EXTERNAL_API=false
2026-01-30 04:55:17.936885 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-30 04:55:17.936894 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-30 04:55:17.936905 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-30 04:55:17.936915 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-30 04:55:17.936925 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-30 04:55:17.936934 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-30 04:55:17.936944 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-30 04:55:17.936954 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-01-30 04:55:17.947171 | orchestrator | + set -e
2026-01-30 04:55:17.947248 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 04:55:17.947259 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 04:55:17.947269 | orchestrator | ++ INTERACTIVE=false
2026-01-30 04:55:17.947276 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 04:55:17.947284 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 04:55:17.947292 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-30 04:55:17.948312 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-30 04:55:17.955259 | orchestrator |
2026-01-30 04:55:17.955338 | orchestrator | # Ceph status
2026-01-30 04:55:17.955351 | orchestrator |
2026-01-30 04:55:17.955361 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-30 04:55:17.955371 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-30 04:55:17.955381 | orchestrator | + echo
2026-01-30 04:55:17.955391 | orchestrator | + echo '# Ceph status'
2026-01-30 04:55:17.955422 | orchestrator | + echo
2026-01-30 04:55:17.955432 | orchestrator | + ceph -s
2026-01-30 04:55:18.562278 | orchestrator | cluster:
2026-01-30 04:55:18.562363 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-01-30 04:55:18.562376 | orchestrator | health: HEALTH_OK
2026-01-30 04:55:18.562386 | orchestrator |
2026-01-30 04:55:18.562394 | orchestrator | services:
2026-01-30 04:55:18.562403 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 68m)
2026-01-30 04:55:18.562424 | orchestrator | mgr: testbed-node-2(active, since 55m), standbys: testbed-node-1, testbed-node-0
2026-01-30 04:55:18.562433 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-01-30 04:55:18.562442 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 65m)
2026-01-30 04:55:18.562450 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-01-30 04:55:18.562459 | orchestrator |
2026-01-30 04:55:18.562467 | orchestrator | data:
2026-01-30 04:55:18.562475 | orchestrator | volumes: 1/1 healthy
2026-01-30 04:55:18.562483 | orchestrator | pools: 14 pools, 401 pgs
2026-01-30 04:55:18.562492 | orchestrator | objects: 555 objects, 2.2 GiB
2026-01-30 04:55:18.562500 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail
2026-01-30 04:55:18.562508 | orchestrator | pgs: 401 active+clean
2026-01-30 04:55:18.562516 | orchestrator |
2026-01-30 04:55:18.606332 | orchestrator |
2026-01-30 04:55:18.606495 | orchestrator | # Ceph versions
2026-01-30 04:55:18.606508 | orchestrator |
2026-01-30 04:55:18.606519 | orchestrator | + echo
2026-01-30 04:55:18.606529 | orchestrator | + echo '# Ceph versions'
2026-01-30 04:55:18.606540 | orchestrator | + echo
2026-01-30 04:55:18.606550 | orchestrator | + ceph versions
2026-01-30 04:55:19.207121 | orchestrator | {
2026-01-30 04:55:19.207239 | orchestrator | "mon": {
2026-01-30 04:55:19.207256 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-30 04:55:19.207269 | orchestrator | },
2026-01-30 04:55:19.207281 | orchestrator | "mgr": {
2026-01-30 04:55:19.207292 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-30 04:55:19.207303 | orchestrator | },
2026-01-30 04:55:19.207314 | orchestrator | "osd": {
2026-01-30 04:55:19.207325 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-01-30 04:55:19.207336 | orchestrator | },
2026-01-30 04:55:19.207347 | orchestrator | "mds": {
2026-01-30 04:55:19.207358 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-30 04:55:19.207369 | orchestrator | },
2026-01-30 04:55:19.207380 | orchestrator | "rgw": {
2026-01-30 04:55:19.207391 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-01-30 04:55:19.207417 | orchestrator | },
2026-01-30 04:55:19.207428 | orchestrator | "overall": {
2026-01-30 04:55:19.207441 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-01-30 04:55:19.207452 | orchestrator | }
2026-01-30 04:55:19.207463 | orchestrator | }
2026-01-30 04:55:19.252209 | orchestrator |
2026-01-30 04:55:19.252315 | orchestrator | # Ceph OSD tree
2026-01-30 04:55:19.252336 | orchestrator |
2026-01-30 04:55:19.252351 | orchestrator | + echo
2026-01-30 04:55:19.252366 | orchestrator | + echo '# Ceph OSD tree'
2026-01-30
04:55:19.252382 | orchestrator | + echo 2026-01-30 04:55:19.252396 | orchestrator | + ceph osd df tree 2026-01-30 04:55:19.715804 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-01-30 04:55:19.715929 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 369 MiB 113 GiB 5.87 1.00 - root default 2026-01-30 04:55:19.715950 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-01-30 04:55:19.715963 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 62 MiB 19 GiB 7.16 1.22 201 up osd.0 2026-01-30 04:55:19.715972 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 936 MiB 875 MiB 1 KiB 62 MiB 19 GiB 4.58 0.78 189 up osd.5 2026-01-30 04:55:19.715980 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4 2026-01-30 04:55:19.715988 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 953 MiB 891 MiB 1 KiB 62 MiB 19 GiB 4.66 0.79 177 up osd.1 2026-01-30 04:55:19.716018 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 62 MiB 19 GiB 7.08 1.21 215 up osd.3 2026-01-30 04:55:19.716027 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-5 2026-01-30 04:55:19.716036 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 7.00 1.19 199 up osd.2 2026-01-30 04:55:19.716044 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 968 MiB 907 MiB 1 KiB 62 MiB 19 GiB 4.73 0.81 189 up osd.4 2026-01-30 04:55:19.716052 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 369 MiB 113 GiB 5.87 2026-01-30 04:55:19.716060 | orchestrator | MIN/MAX VAR: 0.78/1.22 STDDEV: 1.21 2026-01-30 04:55:19.758471 | orchestrator | 2026-01-30 04:55:19.758542 | orchestrator | # Ceph monitor status 2026-01-30 04:55:19.758553 | orchestrator | 2026-01-30 04:55:19.758562 | orchestrator | + echo 2026-01-30 04:55:19.758569 | orchestrator | + echo '# 
Ceph monitor status' 2026-01-30 04:55:19.758576 | orchestrator | + echo 2026-01-30 04:55:19.758581 | orchestrator | + ceph mon stat 2026-01-30 04:55:20.344471 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-01-30 04:55:20.393429 | orchestrator | 2026-01-30 04:55:20.393614 | orchestrator | # Ceph quorum status 2026-01-30 04:55:20.393632 | orchestrator | 2026-01-30 04:55:20.393644 | orchestrator | + echo 2026-01-30 04:55:20.393711 | orchestrator | + echo '# Ceph quorum status' 2026-01-30 04:55:20.393725 | orchestrator | + echo 2026-01-30 04:55:20.393823 | orchestrator | + ceph quorum_status 2026-01-30 04:55:20.394946 | orchestrator | + jq 2026-01-30 04:55:21.027249 | orchestrator | { 2026-01-30 04:55:21.027337 | orchestrator | "election_epoch": 8, 2026-01-30 04:55:21.027348 | orchestrator | "quorum": [ 2026-01-30 04:55:21.027357 | orchestrator | 0, 2026-01-30 04:55:21.027365 | orchestrator | 1, 2026-01-30 04:55:21.027373 | orchestrator | 2 2026-01-30 04:55:21.027385 | orchestrator | ], 2026-01-30 04:55:21.027399 | orchestrator | "quorum_names": [ 2026-01-30 04:55:21.027412 | orchestrator | "testbed-node-0", 2026-01-30 04:55:21.027425 | orchestrator | "testbed-node-1", 2026-01-30 04:55:21.027439 | orchestrator | "testbed-node-2" 2026-01-30 04:55:21.027452 | orchestrator | ], 2026-01-30 04:55:21.027464 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-01-30 04:55:21.027479 | orchestrator | "quorum_age": 4105, 2026-01-30 04:55:21.027493 | orchestrator | "features": { 2026-01-30 04:55:21.027507 | orchestrator | "quorum_con": "4540138322906710015", 2026-01-30 04:55:21.027521 | orchestrator | "quorum_mon": [ 2026-01-30 04:55:21.027535 | 
orchestrator | "kraken", 2026-01-30 04:55:21.027548 | orchestrator | "luminous", 2026-01-30 04:55:21.027560 | orchestrator | "mimic", 2026-01-30 04:55:21.027568 | orchestrator | "osdmap-prune", 2026-01-30 04:55:21.027576 | orchestrator | "nautilus", 2026-01-30 04:55:21.027584 | orchestrator | "octopus", 2026-01-30 04:55:21.027592 | orchestrator | "pacific", 2026-01-30 04:55:21.027600 | orchestrator | "elector-pinging", 2026-01-30 04:55:21.027608 | orchestrator | "quincy", 2026-01-30 04:55:21.027616 | orchestrator | "reef" 2026-01-30 04:55:21.027624 | orchestrator | ] 2026-01-30 04:55:21.027632 | orchestrator | }, 2026-01-30 04:55:21.027639 | orchestrator | "monmap": { 2026-01-30 04:55:21.027647 | orchestrator | "epoch": 1, 2026-01-30 04:55:21.027680 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-01-30 04:55:21.027691 | orchestrator | "modified": "2026-01-30T03:46:39.234559Z", 2026-01-30 04:55:21.027699 | orchestrator | "created": "2026-01-30T03:46:39.234559Z", 2026-01-30 04:55:21.027707 | orchestrator | "min_mon_release": 18, 2026-01-30 04:55:21.027717 | orchestrator | "min_mon_release_name": "reef", 2026-01-30 04:55:21.027726 | orchestrator | "election_strategy": 1, 2026-01-30 04:55:21.027735 | orchestrator | "disallowed_leaders: ": "", 2026-01-30 04:55:21.027744 | orchestrator | "stretch_mode": false, 2026-01-30 04:55:21.027753 | orchestrator | "tiebreaker_mon": "", 2026-01-30 04:55:21.027762 | orchestrator | "removed_ranks: ": "", 2026-01-30 04:55:21.027771 | orchestrator | "features": { 2026-01-30 04:55:21.027784 | orchestrator | "persistent": [ 2026-01-30 04:55:21.027798 | orchestrator | "kraken", 2026-01-30 04:55:21.027811 | orchestrator | "luminous", 2026-01-30 04:55:21.027849 | orchestrator | "mimic", 2026-01-30 04:55:21.027865 | orchestrator | "osdmap-prune", 2026-01-30 04:55:21.027879 | orchestrator | "nautilus", 2026-01-30 04:55:21.027892 | orchestrator | "octopus", 2026-01-30 04:55:21.027905 | orchestrator | "pacific", 2026-01-30 
04:55:21.027915 | orchestrator | "elector-pinging", 2026-01-30 04:55:21.027924 | orchestrator | "quincy", 2026-01-30 04:55:21.027933 | orchestrator | "reef" 2026-01-30 04:55:21.027943 | orchestrator | ], 2026-01-30 04:55:21.027952 | orchestrator | "optional": [] 2026-01-30 04:55:21.027961 | orchestrator | }, 2026-01-30 04:55:21.027970 | orchestrator | "mons": [ 2026-01-30 04:55:21.027979 | orchestrator | { 2026-01-30 04:55:21.028003 | orchestrator | "rank": 0, 2026-01-30 04:55:21.028013 | orchestrator | "name": "testbed-node-0", 2026-01-30 04:55:21.028022 | orchestrator | "public_addrs": { 2026-01-30 04:55:21.028031 | orchestrator | "addrvec": [ 2026-01-30 04:55:21.028040 | orchestrator | { 2026-01-30 04:55:21.028049 | orchestrator | "type": "v2", 2026-01-30 04:55:21.028059 | orchestrator | "addr": "192.168.16.8:3300", 2026-01-30 04:55:21.028068 | orchestrator | "nonce": 0 2026-01-30 04:55:21.028078 | orchestrator | }, 2026-01-30 04:55:21.028087 | orchestrator | { 2026-01-30 04:55:21.028096 | orchestrator | "type": "v1", 2026-01-30 04:55:21.028104 | orchestrator | "addr": "192.168.16.8:6789", 2026-01-30 04:55:21.028112 | orchestrator | "nonce": 0 2026-01-30 04:55:21.028120 | orchestrator | } 2026-01-30 04:55:21.028128 | orchestrator | ] 2026-01-30 04:55:21.028139 | orchestrator | }, 2026-01-30 04:55:21.028152 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-01-30 04:55:21.028165 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-01-30 04:55:21.028178 | orchestrator | "priority": 0, 2026-01-30 04:55:21.028192 | orchestrator | "weight": 0, 2026-01-30 04:55:21.028205 | orchestrator | "crush_location": "{}" 2026-01-30 04:55:21.028219 | orchestrator | }, 2026-01-30 04:55:21.028232 | orchestrator | { 2026-01-30 04:55:21.028246 | orchestrator | "rank": 1, 2026-01-30 04:55:21.028254 | orchestrator | "name": "testbed-node-1", 2026-01-30 04:55:21.028262 | orchestrator | "public_addrs": { 2026-01-30 04:55:21.028270 | orchestrator | "addrvec": [ 2026-01-30 
04:55:21.028277 | orchestrator | { 2026-01-30 04:55:21.028285 | orchestrator | "type": "v2", 2026-01-30 04:55:21.028293 | orchestrator | "addr": "192.168.16.11:3300", 2026-01-30 04:55:21.028301 | orchestrator | "nonce": 0 2026-01-30 04:55:21.028309 | orchestrator | }, 2026-01-30 04:55:21.028317 | orchestrator | { 2026-01-30 04:55:21.028325 | orchestrator | "type": "v1", 2026-01-30 04:55:21.028333 | orchestrator | "addr": "192.168.16.11:6789", 2026-01-30 04:55:21.028341 | orchestrator | "nonce": 0 2026-01-30 04:55:21.028348 | orchestrator | } 2026-01-30 04:55:21.028356 | orchestrator | ] 2026-01-30 04:55:21.028364 | orchestrator | }, 2026-01-30 04:55:21.028372 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-01-30 04:55:21.028380 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-01-30 04:55:21.028388 | orchestrator | "priority": 0, 2026-01-30 04:55:21.028396 | orchestrator | "weight": 0, 2026-01-30 04:55:21.028404 | orchestrator | "crush_location": "{}" 2026-01-30 04:55:21.028412 | orchestrator | }, 2026-01-30 04:55:21.028419 | orchestrator | { 2026-01-30 04:55:21.028427 | orchestrator | "rank": 2, 2026-01-30 04:55:21.028435 | orchestrator | "name": "testbed-node-2", 2026-01-30 04:55:21.028443 | orchestrator | "public_addrs": { 2026-01-30 04:55:21.028451 | orchestrator | "addrvec": [ 2026-01-30 04:55:21.028459 | orchestrator | { 2026-01-30 04:55:21.028467 | orchestrator | "type": "v2", 2026-01-30 04:55:21.028474 | orchestrator | "addr": "192.168.16.12:3300", 2026-01-30 04:55:21.028483 | orchestrator | "nonce": 0 2026-01-30 04:55:21.028497 | orchestrator | }, 2026-01-30 04:55:21.028510 | orchestrator | { 2026-01-30 04:55:21.028523 | orchestrator | "type": "v1", 2026-01-30 04:55:21.028537 | orchestrator | "addr": "192.168.16.12:6789", 2026-01-30 04:55:21.028550 | orchestrator | "nonce": 0 2026-01-30 04:55:21.028564 | orchestrator | } 2026-01-30 04:55:21.028578 | orchestrator | ] 2026-01-30 04:55:21.028592 | orchestrator | }, 2026-01-30 04:55:21.028605 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-01-30 04:55:21.028618 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-01-30 04:55:21.028627 | orchestrator | "priority": 0, 2026-01-30 04:55:21.028642 | orchestrator | "weight": 0, 2026-01-30 04:55:21.028650 | orchestrator | "crush_location": "{}" 2026-01-30 04:55:21.028716 | orchestrator | } 2026-01-30 04:55:21.028725 | orchestrator | ] 2026-01-30 04:55:21.028733 | orchestrator | } 2026-01-30 04:55:21.028741 | orchestrator | } 2026-01-30 04:55:21.028900 | orchestrator | 2026-01-30 04:55:21.028921 | orchestrator | # Ceph free space status 2026-01-30 04:55:21.028936 | orchestrator | 2026-01-30 04:55:21.028949 | orchestrator | + echo 2026-01-30 04:55:21.028962 | orchestrator | + echo '# Ceph free space status' 2026-01-30 04:55:21.028974 | orchestrator | + echo 2026-01-30 04:55:21.028982 | orchestrator | + ceph df 2026-01-30 04:55:21.600186 | orchestrator | --- RAW STORAGE --- 2026-01-30 04:55:21.600309 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-01-30 04:55:21.600348 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-01-30 04:55:21.600380 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-01-30 04:55:21.600398 | orchestrator | 2026-01-30 04:55:21.600416 | orchestrator | --- POOLS --- 2026-01-30 04:55:21.600432 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-01-30 04:55:21.600451 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-01-30 04:55:21.600467 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-01-30 04:55:21.600484 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-01-30 04:55:21.600501 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-01-30 04:55:21.600517 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-01-30 04:55:21.600534 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-01-30 04:55:21.600551 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-01-30 04:55:21.600567 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-01-30 04:55:21.600576 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2026-01-30 04:55:21.600586 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-01-30 04:55:21.600596 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-01-30 04:55:21.600605 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB 2026-01-30 04:55:21.600615 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-01-30 04:55:21.600624 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-01-30 04:55:21.644179 | orchestrator | ++ semver 9.5.0 5.0.0 2026-01-30 04:55:21.705084 | orchestrator | + [[ 1 -eq -1 ]] 2026-01-30 04:55:21.705176 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-01-30 04:55:21.705191 | orchestrator | + osism apply facts 2026-01-30 04:55:33.802404 | orchestrator | 2026-01-30 04:55:33 | INFO  | Task 068b03c9-95bc-4235-b29d-6efe69c0d50f (facts) was prepared for execution. 2026-01-30 04:55:33.802604 | orchestrator | 2026-01-30 04:55:33 | INFO  | It takes a moment until task 068b03c9-95bc-4235-b29d-6efe69c0d50f (facts) has been started and output is visible here. 
2026-01-30 04:55:47.038787 | orchestrator |
2026-01-30 04:55:47.038922 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-30 04:55:47.038941 | orchestrator |
2026-01-30 04:55:47.038953 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-30 04:55:47.038965 | orchestrator | Friday 30 January 2026 04:55:37 +0000 (0:00:00.258) 0:00:00.258 ********
2026-01-30 04:55:47.038976 | orchestrator | ok: [testbed-manager]
2026-01-30 04:55:47.038988 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:55:47.038999 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:55:47.039010 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:55:47.039021 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:55:47.039032 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:55:47.039043 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:55:47.039054 | orchestrator |
2026-01-30 04:55:47.039065 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-30 04:55:47.039102 | orchestrator | Friday 30 January 2026 04:55:39 +0000 (0:00:01.253) 0:00:01.511 ********
2026-01-30 04:55:47.039114 | orchestrator | skipping: [testbed-manager]
2026-01-30 04:55:47.039125 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:55:47.039136 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:55:47.039147 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:55:47.039159 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:55:47.039169 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:55:47.039180 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:55:47.039191 | orchestrator |
2026-01-30 04:55:47.039202 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-30 04:55:47.039213 | orchestrator |
2026-01-30 04:55:47.039224 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-30 04:55:47.039235 | orchestrator | Friday 30 January 2026 04:55:40 +0000 (0:00:01.371) 0:00:02.883 ********
2026-01-30 04:55:47.039248 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:55:47.039262 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:55:47.039274 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:55:47.039286 | orchestrator | ok: [testbed-manager]
2026-01-30 04:55:47.039299 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:55:47.039311 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:55:47.039324 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:55:47.039336 | orchestrator |
2026-01-30 04:55:47.039349 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-30 04:55:47.039362 | orchestrator |
2026-01-30 04:55:47.039375 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-30 04:55:47.039388 | orchestrator | Friday 30 January 2026 04:55:46 +0000 (0:00:05.518) 0:00:08.401 ********
2026-01-30 04:55:47.039401 | orchestrator | skipping: [testbed-manager]
2026-01-30 04:55:47.039414 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:55:47.039427 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:55:47.039439 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:55:47.039452 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:55:47.039465 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:55:47.039477 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:55:47.039490 | orchestrator |
2026-01-30 04:55:47.039503 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:55:47.039516 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 04:55:47.039530 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 04:55:47.039543 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 04:55:47.039572 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 04:55:47.039586 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 04:55:47.039599 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 04:55:47.039612 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 04:55:47.039625 | orchestrator |
2026-01-30 04:55:47.039638 | orchestrator |
2026-01-30 04:55:47.039649 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:55:47.039691 | orchestrator | Friday 30 January 2026 04:55:46 +0000 (0:00:00.569) 0:00:08.970 ********
2026-01-30 04:55:47.039702 | orchestrator | ===============================================================================
2026-01-30 04:55:47.039713 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.52s
2026-01-30 04:55:47.039749 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2026-01-30 04:55:47.039771 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s
2026-01-30 04:55:47.039783 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s
2026-01-30 04:55:47.337629 | orchestrator | + osism validate ceph-mons
2026-01-30 04:56:11.814095 | orchestrator |
2026-01-30 04:56:11.814236 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-01-30 04:56:11.814251 | orchestrator |
2026-01-30 04:56:11.814257 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-01-30 04:56:11.814264 | orchestrator | Friday 30 January 2026 04:55:56 +0000 (0:00:00.419) 0:00:00.419 ********
2026-01-30 04:56:11.814271 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-30 04:56:11.814278 | orchestrator |
2026-01-30 04:56:11.814284 | orchestrator | TASK [Create report output directory] ******************************************
2026-01-30 04:56:11.814291 | orchestrator | Friday 30 January 2026 04:55:57 +0000 (0:00:00.773) 0:00:01.192 ********
2026-01-30 04:56:11.814297 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-30 04:56:11.814304 | orchestrator |
2026-01-30 04:56:11.814309 | orchestrator | TASK [Define report vars] ******************************************************
2026-01-30 04:56:11.814315 | orchestrator | Friday 30 January 2026 04:55:58 +0000 (0:00:00.938) 0:00:02.130 ********
2026-01-30 04:56:11.814321 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814327 | orchestrator |
2026-01-30 04:56:11.814333 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-01-30 04:56:11.814339 | orchestrator | Friday 30 January 2026 04:55:58 +0000 (0:00:00.131) 0:00:02.262 ********
2026-01-30 04:56:11.814347 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814353 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:56:11.814358 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:56:11.814364 | orchestrator |
2026-01-30 04:56:11.814371 | orchestrator | TASK [Get container info] ******************************************************
2026-01-30 04:56:11.814377 | orchestrator | Friday 30 January 2026 04:55:58 +0000 (0:00:00.263) 0:00:02.525 ********
2026-01-30 04:56:11.814383 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:56:11.814389 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814395 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:56:11.814401 | orchestrator |
2026-01-30 04:56:11.814407 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-01-30 04:56:11.814414 | orchestrator | Friday 30 January 2026 04:55:59 +0000 (0:00:01.008) 0:00:03.534 ********
2026-01-30 04:56:11.814420 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814426 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:56:11.814433 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:56:11.814439 | orchestrator |
2026-01-30 04:56:11.814446 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-01-30 04:56:11.814452 | orchestrator | Friday 30 January 2026 04:56:00 +0000 (0:00:00.284) 0:00:03.819 ********
2026-01-30 04:56:11.814458 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814465 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:56:11.814471 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:56:11.814477 | orchestrator |
2026-01-30 04:56:11.814483 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-30 04:56:11.814489 | orchestrator | Friday 30 January 2026 04:56:00 +0000 (0:00:00.509) 0:00:04.328 ********
2026-01-30 04:56:11.814496 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814502 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:56:11.814508 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:56:11.814515 | orchestrator |
2026-01-30 04:56:11.814521 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-01-30 04:56:11.814528 | orchestrator | Friday 30 January 2026 04:56:00 +0000 (0:00:00.293) 0:00:04.622 ********
2026-01-30 04:56:11.814534 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814541 | orchestrator | skipping: [testbed-node-1]
2026-01-30 04:56:11.814569 | orchestrator | skipping: [testbed-node-2]
2026-01-30 04:56:11.814577 | orchestrator |
2026-01-30 04:56:11.814582 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-01-30 04:56:11.814586 | orchestrator | Friday 30 January 2026 04:56:01 +0000 (0:00:00.319) 0:00:04.941 ********
2026-01-30 04:56:11.814590 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814594 | orchestrator | ok: [testbed-node-1]
2026-01-30 04:56:11.814598 | orchestrator | ok: [testbed-node-2]
2026-01-30 04:56:11.814603 | orchestrator |
2026-01-30 04:56:11.814607 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-30 04:56:11.814612 | orchestrator | Friday 30 January 2026 04:56:01 +0000 (0:00:00.435) 0:00:05.377 ********
2026-01-30 04:56:11.814616 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814621 | orchestrator |
2026-01-30 04:56:11.814625 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-30 04:56:11.814629 | orchestrator | Friday 30 January 2026 04:56:01 +0000 (0:00:00.255) 0:00:05.632 ********
2026-01-30 04:56:11.814634 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814638 | orchestrator |
2026-01-30 04:56:11.814643 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-30 04:56:11.814647 | orchestrator | Friday 30 January 2026 04:56:02 +0000 (0:00:00.249) 0:00:05.882 ********
2026-01-30 04:56:11.814651 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814696 | orchestrator |
2026-01-30 04:56:11.814701 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:56:11.814706 | orchestrator | Friday 30 January 2026 04:56:02 +0000 (0:00:00.241) 0:00:06.123 ********
2026-01-30 04:56:11.814710 | orchestrator |
2026-01-30 04:56:11.814715 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:56:11.814719 | orchestrator | Friday 30 January 2026 04:56:02 +0000 (0:00:00.068) 0:00:06.192 ********
2026-01-30 04:56:11.814723 | orchestrator |
2026-01-30 04:56:11.814728 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:56:11.814732 | orchestrator | Friday 30 January 2026 04:56:02 +0000 (0:00:00.072) 0:00:06.264 ********
2026-01-30 04:56:11.814737 | orchestrator |
2026-01-30 04:56:11.814741 | orchestrator | TASK [Print report file information] *******************************************
2026-01-30 04:56:11.814746 | orchestrator | Friday 30 January 2026 04:56:02 +0000 (0:00:00.071) 0:00:06.335 ********
2026-01-30 04:56:11.814750 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814755 | orchestrator |
2026-01-30 04:56:11.814759 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-01-30 04:56:11.814777 | orchestrator | Friday 30 January 2026 04:56:02 +0000 (0:00:00.235) 0:00:06.571 ********
2026-01-30 04:56:11.814782 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814786 | orchestrator |
2026-01-30 04:56:11.814804 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-01-30 04:56:11.814809 | orchestrator | Friday 30 January 2026 04:56:03 +0000 (0:00:00.275) 0:00:06.846 ********
2026-01-30 04:56:11.814813 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814818 | orchestrator |
2026-01-30 04:56:11.814822 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-01-30 04:56:11.814826 | orchestrator | Friday 30 January 2026 04:56:03 +0000 (0:00:00.113) 0:00:06.960 ********
2026-01-30 04:56:11.814831 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:56:11.814838 | orchestrator |
2026-01-30 04:56:11.814843 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-01-30 04:56:11.814847 | orchestrator | Friday 30 January 2026 04:56:04 +0000 (0:00:01.600) 0:00:08.560 ********
2026-01-30 04:56:11.814851 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814856 | orchestrator |
2026-01-30 04:56:11.814860 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-01-30 04:56:11.814864 | orchestrator | Friday 30 January 2026 04:56:05 +0000 (0:00:00.465) 0:00:09.026 ********
2026-01-30 04:56:11.814868 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814878 | orchestrator |
2026-01-30 04:56:11.814883 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-01-30 04:56:11.814887 | orchestrator | Friday 30 January 2026 04:56:05 +0000 (0:00:00.125) 0:00:09.152 ********
2026-01-30 04:56:11.814892 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814896 | orchestrator |
2026-01-30 04:56:11.814900 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-01-30 04:56:11.814904 | orchestrator | Friday 30 January 2026 04:56:05 +0000 (0:00:00.305) 0:00:09.457 ********
2026-01-30 04:56:11.814909 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814913 | orchestrator |
2026-01-30 04:56:11.814917 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-01-30 04:56:11.814921 | orchestrator | Friday 30 January 2026 04:56:06 +0000 (0:00:00.299) 0:00:09.756 ********
2026-01-30 04:56:11.814926 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.814930 | orchestrator |
2026-01-30 04:56:11.814934 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-01-30 04:56:11.814938 | orchestrator | Friday 30 January 2026 04:56:06 +0000 (0:00:00.104) 0:00:09.861 ********
2026-01-30 04:56:11.814943 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814947 | orchestrator |
2026-01-30 04:56:11.814951 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-01-30 04:56:11.814956 | orchestrator | Friday 30 January 2026 04:56:06 +0000 (0:00:00.116) 0:00:09.978 ********
2026-01-30 04:56:11.814960 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814964 | orchestrator |
2026-01-30 04:56:11.814968 | orchestrator | TASK [Gather status data] ******************************************************
2026-01-30 04:56:11.814973 | orchestrator | Friday 30 January 2026 04:56:06 +0000 (0:00:00.102) 0:00:10.080 ********
2026-01-30 04:56:11.814978 | orchestrator | changed: [testbed-node-0]
2026-01-30 04:56:11.814982 | orchestrator |
2026-01-30 04:56:11.814986 | orchestrator | TASK [Set health test data] ****************************************************
2026-01-30 04:56:11.814991 | orchestrator | Friday 30 January 2026 04:56:07 +0000 (0:00:01.338) 0:00:11.418 ********
2026-01-30 04:56:11.814995 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.814999 | orchestrator |
2026-01-30 04:56:11.815004 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-01-30 04:56:11.815008 | orchestrator | Friday 30 January 2026 04:56:07 +0000 (0:00:00.285) 0:00:11.704 ********
2026-01-30 04:56:11.815013 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.815017 | orchestrator |
2026-01-30 04:56:11.815021 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-01-30 04:56:11.815025 | orchestrator | Friday 30 January 2026 04:56:08 +0000 (0:00:00.157) 0:00:11.862 ********
2026-01-30 04:56:11.815029 | orchestrator | ok: [testbed-node-0]
2026-01-30 04:56:11.815032 | orchestrator |
2026-01-30 04:56:11.815036 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-01-30 04:56:11.815040 | orchestrator | Friday 30 January 2026 04:56:08 +0000 (0:00:00.135) 0:00:11.997 ********
2026-01-30 04:56:11.815044 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.815047 | orchestrator |
2026-01-30 04:56:11.815051 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-01-30 04:56:11.815055 | orchestrator | Friday 30 January 2026 04:56:08 +0000 (0:00:00.132) 0:00:12.130 ********
2026-01-30 04:56:11.815061 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.815065 | orchestrator |
2026-01-30 04:56:11.815069 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-30 04:56:11.815073 | orchestrator | Friday 30 January 2026 04:56:08 +0000 (0:00:00.299) 0:00:12.429 ********
2026-01-30 04:56:11.815076 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-30 04:56:11.815080 | orchestrator |
2026-01-30 04:56:11.815084 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-30 04:56:11.815088 | orchestrator | Friday 30 January 2026 04:56:08 +0000 (0:00:00.254) 0:00:12.684 ********
2026-01-30 04:56:11.815095 | orchestrator | skipping: [testbed-node-0]
2026-01-30 04:56:11.815099 | orchestrator |
2026-01-30 04:56:11.815103 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-30 04:56:11.815107 | orchestrator | Friday 30 January 2026 04:56:09 +0000 (0:00:00.283) 0:00:12.967 ********
2026-01-30 04:56:11.815110 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-30 04:56:11.815114 | orchestrator |
2026-01-30 04:56:11.815118 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-30 04:56:11.815122 | orchestrator | Friday 30 January 2026 04:56:11 +0000 (0:00:01.811) 0:00:14.778 ********
2026-01-30 04:56:11.815126 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-01-30 04:56:11.815129 | orchestrator |
2026-01-30 04:56:11.815133 | orchestrator | TASK [Aggregate
test results step three] *************************************** 2026-01-30 04:56:11.815137 | orchestrator | Friday 30 January 2026 04:56:11 +0000 (0:00:00.260) 0:00:15.039 ******** 2026-01-30 04:56:11.815141 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:11.815144 | orchestrator | 2026-01-30 04:56:11.815151 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:14.461418 | orchestrator | Friday 30 January 2026 04:56:11 +0000 (0:00:00.259) 0:00:15.298 ******** 2026-01-30 04:56:14.461501 | orchestrator | 2026-01-30 04:56:14.461511 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:14.461518 | orchestrator | Friday 30 January 2026 04:56:11 +0000 (0:00:00.083) 0:00:15.381 ******** 2026-01-30 04:56:14.461525 | orchestrator | 2026-01-30 04:56:14.461532 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:14.461538 | orchestrator | Friday 30 January 2026 04:56:11 +0000 (0:00:00.069) 0:00:15.450 ******** 2026-01-30 04:56:14.461544 | orchestrator | 2026-01-30 04:56:14.461550 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-30 04:56:14.461557 | orchestrator | Friday 30 January 2026 04:56:11 +0000 (0:00:00.084) 0:00:15.535 ******** 2026-01-30 04:56:14.461563 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:14.461569 | orchestrator | 2026-01-30 04:56:14.461575 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-30 04:56:14.461583 | orchestrator | Friday 30 January 2026 04:56:13 +0000 (0:00:01.544) 0:00:17.080 ******** 2026-01-30 04:56:14.461591 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-30 04:56:14.461601 | orchestrator |  "msg": [ 2026-01-30 
04:56:14.461611 | orchestrator |  "Validator run completed.", 2026-01-30 04:56:14.461620 | orchestrator |  "You can find the report file here:", 2026-01-30 04:56:14.461630 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-01-30T04:55:57+00:00-report.json", 2026-01-30 04:56:14.461640 | orchestrator |  "on the following host:", 2026-01-30 04:56:14.461649 | orchestrator |  "testbed-manager" 2026-01-30 04:56:14.461710 | orchestrator |  ] 2026-01-30 04:56:14.461721 | orchestrator | } 2026-01-30 04:56:14.461730 | orchestrator | 2026-01-30 04:56:14.461739 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:56:14.461749 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-01-30 04:56:14.461759 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 04:56:14.461768 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 04:56:14.461776 | orchestrator | 2026-01-30 04:56:14.461785 | orchestrator | 2026-01-30 04:56:14.461794 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:56:14.461802 | orchestrator | Friday 30 January 2026 04:56:14 +0000 (0:00:00.821) 0:00:17.901 ******** 2026-01-30 04:56:14.461834 | orchestrator | =============================================================================== 2026-01-30 04:56:14.461843 | orchestrator | Aggregate test results step one ----------------------------------------- 1.81s 2026-01-30 04:56:14.461851 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.60s 2026-01-30 04:56:14.461860 | orchestrator | Write report file ------------------------------------------------------- 1.54s 2026-01-30 04:56:14.461868 | orchestrator | Gather status data 
------------------------------------------------------ 1.34s 2026-01-30 04:56:14.461877 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2026-01-30 04:56:14.461886 | orchestrator | Create report output directory ------------------------------------------ 0.94s 2026-01-30 04:56:14.461896 | orchestrator | Print report file information ------------------------------------------- 0.82s 2026-01-30 04:56:14.461911 | orchestrator | Get timestamp for report file ------------------------------------------- 0.77s 2026-01-30 04:56:14.461925 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s 2026-01-30 04:56:14.461939 | orchestrator | Set quorum test data ---------------------------------------------------- 0.47s 2026-01-30 04:56:14.461953 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.44s 2026-01-30 04:56:14.461984 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.32s 2026-01-30 04:56:14.461999 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2026-01-30 04:56:14.462013 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.30s 2026-01-30 04:56:14.462094 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2026-01-30 04:56:14.462110 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2026-01-30 04:56:14.462125 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2026-01-30 04:56:14.462140 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2026-01-30 04:56:14.462155 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-01-30 04:56:14.462171 | orchestrator | Fail due to missing containers 
------------------------------------------ 0.28s 2026-01-30 04:56:14.737949 | orchestrator | + osism validate ceph-mgrs 2026-01-30 04:56:44.480339 | orchestrator | 2026-01-30 04:56:44.480419 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-01-30 04:56:44.480427 | orchestrator | 2026-01-30 04:56:44.480432 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-30 04:56:44.480437 | orchestrator | Friday 30 January 2026 04:56:30 +0000 (0:00:00.325) 0:00:00.325 ******** 2026-01-30 04:56:44.480442 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:44.480446 | orchestrator | 2026-01-30 04:56:44.480450 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-30 04:56:44.480454 | orchestrator | Friday 30 January 2026 04:56:31 +0000 (0:00:00.721) 0:00:01.046 ******** 2026-01-30 04:56:44.480458 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:44.480462 | orchestrator | 2026-01-30 04:56:44.480466 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-30 04:56:44.480470 | orchestrator | Friday 30 January 2026 04:56:32 +0000 (0:00:00.796) 0:00:01.842 ******** 2026-01-30 04:56:44.480474 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480479 | orchestrator | 2026-01-30 04:56:44.480483 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-01-30 04:56:44.480487 | orchestrator | Friday 30 January 2026 04:56:32 +0000 (0:00:00.122) 0:00:01.965 ******** 2026-01-30 04:56:44.480490 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480494 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:56:44.480498 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:56:44.480502 | orchestrator | 2026-01-30 04:56:44.480506 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-01-30 04:56:44.480509 | orchestrator | Friday 30 January 2026 04:56:32 +0000 (0:00:00.273) 0:00:02.239 ******** 2026-01-30 04:56:44.480528 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:56:44.480532 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480536 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:56:44.480540 | orchestrator | 2026-01-30 04:56:44.480543 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-01-30 04:56:44.480547 | orchestrator | Friday 30 January 2026 04:56:33 +0000 (0:00:01.047) 0:00:03.286 ******** 2026-01-30 04:56:44.480551 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480555 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:56:44.480559 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:56:44.480563 | orchestrator | 2026-01-30 04:56:44.480566 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-01-30 04:56:44.480570 | orchestrator | Friday 30 January 2026 04:56:34 +0000 (0:00:00.295) 0:00:03.582 ******** 2026-01-30 04:56:44.480575 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480579 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:56:44.480582 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:56:44.480586 | orchestrator | 2026-01-30 04:56:44.480590 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-01-30 04:56:44.480594 | orchestrator | Friday 30 January 2026 04:56:34 +0000 (0:00:00.521) 0:00:04.104 ******** 2026-01-30 04:56:44.480598 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480601 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:56:44.480605 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:56:44.480609 | orchestrator | 2026-01-30 04:56:44.480613 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-01-30 04:56:44.480617 | orchestrator | Friday 30 January 2026 04:56:34 +0000 (0:00:00.278) 0:00:04.382 ******** 2026-01-30 04:56:44.480620 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480624 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:56:44.480628 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:56:44.480632 | orchestrator | 2026-01-30 04:56:44.480636 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-01-30 04:56:44.480640 | orchestrator | Friday 30 January 2026 04:56:35 +0000 (0:00:00.248) 0:00:04.630 ******** 2026-01-30 04:56:44.480644 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480650 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:56:44.480757 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:56:44.480772 | orchestrator | 2026-01-30 04:56:44.480776 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-30 04:56:44.480780 | orchestrator | Friday 30 January 2026 04:56:35 +0000 (0:00:00.373) 0:00:05.004 ******** 2026-01-30 04:56:44.480784 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480788 | orchestrator | 2026-01-30 04:56:44.480791 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-30 04:56:44.480795 | orchestrator | Friday 30 January 2026 04:56:35 +0000 (0:00:00.231) 0:00:05.236 ******** 2026-01-30 04:56:44.480799 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480803 | orchestrator | 2026-01-30 04:56:44.480806 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-30 04:56:44.480810 | orchestrator | Friday 30 January 2026 04:56:36 +0000 (0:00:00.246) 0:00:05.483 ******** 2026-01-30 04:56:44.480814 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480817 | orchestrator | 2026-01-30 04:56:44.480821 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-01-30 04:56:44.480825 | orchestrator | Friday 30 January 2026 04:56:36 +0000 (0:00:00.224) 0:00:05.708 ******** 2026-01-30 04:56:44.480828 | orchestrator | 2026-01-30 04:56:44.480832 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:44.480836 | orchestrator | Friday 30 January 2026 04:56:36 +0000 (0:00:00.065) 0:00:05.773 ******** 2026-01-30 04:56:44.480840 | orchestrator | 2026-01-30 04:56:44.480843 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:44.480847 | orchestrator | Friday 30 January 2026 04:56:36 +0000 (0:00:00.063) 0:00:05.837 ******** 2026-01-30 04:56:44.480857 | orchestrator | 2026-01-30 04:56:44.480861 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-30 04:56:44.480865 | orchestrator | Friday 30 January 2026 04:56:36 +0000 (0:00:00.065) 0:00:05.903 ******** 2026-01-30 04:56:44.480868 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480872 | orchestrator | 2026-01-30 04:56:44.480876 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-01-30 04:56:44.480879 | orchestrator | Friday 30 January 2026 04:56:36 +0000 (0:00:00.226) 0:00:06.130 ******** 2026-01-30 04:56:44.480883 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480887 | orchestrator | 2026-01-30 04:56:44.480902 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-01-30 04:56:44.480907 | orchestrator | Friday 30 January 2026 04:56:36 +0000 (0:00:00.244) 0:00:06.374 ******** 2026-01-30 04:56:44.480910 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480914 | orchestrator | 2026-01-30 04:56:44.480918 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-01-30 04:56:44.480921 | orchestrator | Friday 30 January 2026 04:56:37 +0000 (0:00:00.128) 0:00:06.503 ******** 2026-01-30 04:56:44.480925 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:56:44.480929 | orchestrator | 2026-01-30 04:56:44.480933 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-01-30 04:56:44.480936 | orchestrator | Friday 30 January 2026 04:56:39 +0000 (0:00:02.097) 0:00:08.601 ******** 2026-01-30 04:56:44.480940 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480944 | orchestrator | 2026-01-30 04:56:44.480959 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-01-30 04:56:44.480963 | orchestrator | Friday 30 January 2026 04:56:39 +0000 (0:00:00.402) 0:00:09.003 ******** 2026-01-30 04:56:44.480967 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.480971 | orchestrator | 2026-01-30 04:56:44.480975 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-01-30 04:56:44.480978 | orchestrator | Friday 30 January 2026 04:56:39 +0000 (0:00:00.300) 0:00:09.304 ******** 2026-01-30 04:56:44.480982 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.480986 | orchestrator | 2026-01-30 04:56:44.480989 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-01-30 04:56:44.480993 | orchestrator | Friday 30 January 2026 04:56:39 +0000 (0:00:00.130) 0:00:09.434 ******** 2026-01-30 04:56:44.480997 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:56:44.481000 | orchestrator | 2026-01-30 04:56:44.481004 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-01-30 04:56:44.481008 | orchestrator | Friday 30 January 2026 04:56:40 +0000 (0:00:00.131) 0:00:09.566 ******** 2026-01-30 04:56:44.481011 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 
04:56:44.481015 | orchestrator | 2026-01-30 04:56:44.481019 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-01-30 04:56:44.481022 | orchestrator | Friday 30 January 2026 04:56:40 +0000 (0:00:00.247) 0:00:09.814 ******** 2026-01-30 04:56:44.481026 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:56:44.481030 | orchestrator | 2026-01-30 04:56:44.481033 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-01-30 04:56:44.481037 | orchestrator | Friday 30 January 2026 04:56:40 +0000 (0:00:00.281) 0:00:10.095 ******** 2026-01-30 04:56:44.481041 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:44.481045 | orchestrator | 2026-01-30 04:56:44.481057 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-01-30 04:56:44.481061 | orchestrator | Friday 30 January 2026 04:56:41 +0000 (0:00:01.261) 0:00:11.356 ******** 2026-01-30 04:56:44.481064 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:44.481068 | orchestrator | 2026-01-30 04:56:44.481072 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-01-30 04:56:44.481076 | orchestrator | Friday 30 January 2026 04:56:42 +0000 (0:00:00.238) 0:00:11.595 ******** 2026-01-30 04:56:44.481083 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:44.481086 | orchestrator | 2026-01-30 04:56:44.481090 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:44.481094 | orchestrator | Friday 30 January 2026 04:56:42 +0000 (0:00:00.238) 0:00:11.833 ******** 2026-01-30 04:56:44.481098 | orchestrator | 2026-01-30 04:56:44.481101 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:44.481105 | orchestrator 
| Friday 30 January 2026 04:56:42 +0000 (0:00:00.067) 0:00:11.901 ******** 2026-01-30 04:56:44.481109 | orchestrator | 2026-01-30 04:56:44.481112 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-01-30 04:56:44.481116 | orchestrator | Friday 30 January 2026 04:56:42 +0000 (0:00:00.067) 0:00:11.968 ******** 2026-01-30 04:56:44.481120 | orchestrator | 2026-01-30 04:56:44.481124 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-01-30 04:56:44.481127 | orchestrator | Friday 30 January 2026 04:56:42 +0000 (0:00:00.237) 0:00:12.205 ******** 2026-01-30 04:56:44.481131 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-01-30 04:56:44.481135 | orchestrator | 2026-01-30 04:56:44.481138 | orchestrator | TASK [Print report file information] ******************************************* 2026-01-30 04:56:44.481142 | orchestrator | Friday 30 January 2026 04:56:44 +0000 (0:00:01.329) 0:00:13.535 ******** 2026-01-30 04:56:44.481146 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-01-30 04:56:44.481150 | orchestrator |  "msg": [ 2026-01-30 04:56:44.481154 | orchestrator |  "Validator run completed.", 2026-01-30 04:56:44.481161 | orchestrator |  "You can find the report file here:", 2026-01-30 04:56:44.481164 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-01-30T04:56:31+00:00-report.json", 2026-01-30 04:56:44.481169 | orchestrator |  "on the following host:", 2026-01-30 04:56:44.481172 | orchestrator |  "testbed-manager" 2026-01-30 04:56:44.481176 | orchestrator |  ] 2026-01-30 04:56:44.481180 | orchestrator | } 2026-01-30 04:56:44.481184 | orchestrator | 2026-01-30 04:56:44.481188 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:56:44.481193 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-01-30 04:56:44.481198 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 04:56:44.481206 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 04:56:44.775822 | orchestrator | 2026-01-30 04:56:44.775928 | orchestrator | 2026-01-30 04:56:44.775943 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:56:44.775957 | orchestrator | Friday 30 January 2026 04:56:44 +0000 (0:00:00.406) 0:00:13.942 ******** 2026-01-30 04:56:44.775966 | orchestrator | =============================================================================== 2026-01-30 04:56:44.775975 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.10s 2026-01-30 04:56:44.775985 | orchestrator | Write report file ------------------------------------------------------- 1.33s 2026-01-30 04:56:44.775994 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s 2026-01-30 04:56:44.776003 | orchestrator | Get container info ------------------------------------------------------ 1.05s 2026-01-30 04:56:44.776012 | orchestrator | Create report output directory ------------------------------------------ 0.80s 2026-01-30 04:56:44.776022 | orchestrator | Get timestamp for report file ------------------------------------------- 0.72s 2026-01-30 04:56:44.776031 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s 2026-01-30 04:56:44.776042 | orchestrator | Print report file information ------------------------------------------- 0.41s 2026-01-30 04:56:44.776079 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.40s 2026-01-30 04:56:44.776088 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.37s 2026-01-30 04:56:44.776094 | 
orchestrator | Flush handlers ---------------------------------------------------------- 0.37s 2026-01-30 04:56:44.776100 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.30s 2026-01-30 04:56:44.776106 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2026-01-30 04:56:44.776111 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-01-30 04:56:44.776117 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2026-01-30 04:56:44.776123 | orchestrator | Prepare test data for container existance test -------------------------- 0.27s 2026-01-30 04:56:44.776129 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.25s 2026-01-30 04:56:44.776135 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.25s 2026-01-30 04:56:44.776140 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s 2026-01-30 04:56:44.776146 | orchestrator | Fail due to missing containers ------------------------------------------ 0.24s 2026-01-30 04:56:45.120988 | orchestrator | + osism validate ceph-osds 2026-01-30 04:57:05.918157 | orchestrator | 2026-01-30 04:57:05.918254 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-01-30 04:57:05.918266 | orchestrator | 2026-01-30 04:57:05.918274 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-01-30 04:57:05.918283 | orchestrator | Friday 30 January 2026 04:57:01 +0000 (0:00:00.411) 0:00:00.411 ******** 2026-01-30 04:57:05.918292 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-30 04:57:05.918299 | orchestrator | 2026-01-30 04:57:05.918307 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2026-01-30 04:57:05.918315 | orchestrator | Friday 30 January 2026 04:57:02 +0000 (0:00:00.784) 0:00:01.196 ******** 2026-01-30 04:57:05.918323 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-30 04:57:05.918330 | orchestrator | 2026-01-30 04:57:05.918338 | orchestrator | TASK [Create report output directory] ****************************************** 2026-01-30 04:57:05.918345 | orchestrator | Friday 30 January 2026 04:57:02 +0000 (0:00:00.473) 0:00:01.669 ******** 2026-01-30 04:57:05.918353 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-30 04:57:05.918360 | orchestrator | 2026-01-30 04:57:05.918368 | orchestrator | TASK [Define report vars] ****************************************************** 2026-01-30 04:57:05.918375 | orchestrator | Friday 30 January 2026 04:57:03 +0000 (0:00:00.713) 0:00:02.383 ******** 2026-01-30 04:57:05.918383 | orchestrator | ok: [testbed-node-3] 2026-01-30 04:57:05.918393 | orchestrator | 2026-01-30 04:57:05.918401 | orchestrator | TASK [Define OSD test variables] *********************************************** 2026-01-30 04:57:05.918408 | orchestrator | Friday 30 January 2026 04:57:03 +0000 (0:00:00.128) 0:00:02.512 ******** 2026-01-30 04:57:05.918416 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:57:05.918424 | orchestrator | 2026-01-30 04:57:05.918431 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2026-01-30 04:57:05.918439 | orchestrator | Friday 30 January 2026 04:57:03 +0000 (0:00:00.122) 0:00:02.635 ******** 2026-01-30 04:57:05.918446 | orchestrator | skipping: [testbed-node-3] 2026-01-30 04:57:05.918454 | orchestrator | skipping: [testbed-node-4] 2026-01-30 04:57:05.918461 | orchestrator | skipping: [testbed-node-5] 2026-01-30 04:57:05.918469 | orchestrator | 2026-01-30 04:57:05.918489 | orchestrator | TASK [Define OSD test variables] 
***********************************************
2026-01-30 04:57:05.918497 | orchestrator | Friday 30 January 2026 04:57:04 +0000 (0:00:00.317) 0:00:02.952 ********
2026-01-30 04:57:05.918505 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:05.918512 | orchestrator |
2026-01-30 04:57:05.918523 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-01-30 04:57:05.918557 | orchestrator | Friday 30 January 2026 04:57:04 +0000 (0:00:00.144) 0:00:03.097 ********
2026-01-30 04:57:05.918570 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:05.918582 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:05.918592 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:05.918604 | orchestrator |
2026-01-30 04:57:05.918616 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-01-30 04:57:05.918627 | orchestrator | Friday 30 January 2026 04:57:04 +0000 (0:00:00.313) 0:00:03.410 ********
2026-01-30 04:57:05.918639 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:05.918677 | orchestrator |
2026-01-30 04:57:05.918693 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-30 04:57:05.918707 | orchestrator | Friday 30 January 2026 04:57:05 +0000 (0:00:00.720) 0:00:04.130 ********
2026-01-30 04:57:05.918723 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:05.918736 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:05.918749 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:05.918762 | orchestrator |
2026-01-30 04:57:05.918774 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-01-30 04:57:05.918787 | orchestrator | Friday 30 January 2026 04:57:05 +0000 (0:00:00.280) 0:00:04.411 ********
2026-01-30 04:57:05.918803 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fe2e0a1d2dbb70dcfd11f9eaebc2fa8387500272ebf677d2361808213e46c2a6', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-01-30 04:57:05.918819 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bfd260189fef6a5ba3253c6db4cacf7a4a12cdbc989e1118ecb4db8e2c9b6220', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 9 minutes'})
2026-01-30 04:57:05.918833 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2dcfd327db92dc14b33f7e4082cc40787bc14137725dc8fb3983a020f244d488', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-01-30 04:57:05.918847 | orchestrator | skipping: [testbed-node-3] => (item={'id': '28c5a318e8e335ca2f7064bc7af2fe9e39d19ffdafa7cd8715cecdcdb81568ff', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-01-30 04:57:05.918860 | orchestrator | skipping: [testbed-node-3] => (item={'id': '71743be2c43eff6edfe03a628058de0fca26074aa5b048a3ca00c85d7d6a73b2', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-01-30 04:57:05.918904 | orchestrator | skipping: [testbed-node-3] => (item={'id': '236f2a31e8db9de859bf05ced0aec2a201b3fe1d403e7a805a6a24cf9d6620ca', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-01-30 04:57:05.918921 | orchestrator | skipping: [testbed-node-3] => (item={'id': '334578d25eb18187fd9abb8e9266b1309cdcf30a59bc44f68d8eae5306b2c585', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-01-30 04:57:05.918935 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a66ee71cd39f6fb198adc91fea65a7be2883d4d0f4c9f89e5deab2ea9383d487', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})
2026-01-30 04:57:05.918948 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7103cfa15f49d7af453cab2ab14fad2249bb490e7872d0585882042ab03e47a7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:05.918969 | orchestrator | skipping: [testbed-node-3] => (item={'id': '985637d292bb1a6daea6fc4a5daba65f4f10a601d247dc523102ede6ca315030', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:05.918979 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c124df40edcbd934cc5964192137e17eb3cf10357242225d1e35995aaae56d51', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:05.918989 | orchestrator | ok: [testbed-node-3] => (item={'id': '370da41073752dde7c5f20d86b7022c4e12afa7e94e9609ee19b5380cce01e2e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:05.918999 | orchestrator | ok: [testbed-node-3] => (item={'id': 'ae30a2d67df837d63506fa058373d06d2a1f3b1ae0dda1f4b928652313429ba8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:05.919007 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a4330578db615c9d98c2c537e2b33a4cb4bbfba80873e2477a808edc80757f22', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:05.919015 | orchestrator | skipping: [testbed-node-3] => (item={'id': '139a5af978c66cb20f61f63fb449d8c03b8c85531d8fa4575bef1c84e0cb634c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-30 04:57:05.919023 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9e1a913b9af95ae3ea65f724210c0895f980e6eb061dff42039a319fc5791b6e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-30 04:57:05.919030 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7b3fd3b36112b390aa248882f3136234b9b6ab685b75a374ae29bb06f880d0b', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:05.919038 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fa916b51aea70b25723dab76484a9bbcf6e663703c104399b3d2e6ca4b2581a8', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:05.919045 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd84cbc22b4f7c250df98bd5d86ee053f4793f901e54ecda6654a3055400be780', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:05.919053 | orchestrator | skipping: [testbed-node-4] => (item={'id': '83d6f9cc5398cc59f21928bc3caebb5ac43b04bf9b77e025b16bc1de12f43fd5', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-01-30 04:57:05.919068 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e139c7aba4c7389e945719b82f1de3f0d5c8e3eb641934db31f0d5cd15b8ea26', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 9 minutes'})
2026-01-30 04:57:06.151055 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f8be895a7296427393028d8b915c8274bf8615fc32f9372a8bb0975eb229b98', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-01-30 04:57:06.151206 | orchestrator | skipping: [testbed-node-4] => (item={'id': '532c0494255723b7e7bd695e617a1956e4636911476faaf6bab1648a7ddb72c9', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-01-30 04:57:06.151244 | orchestrator | skipping: [testbed-node-4] => (item={'id': '87e260d87a680c041ab5a95bd73dcc3d4235680e5f732070193e10c1195ee4d8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-01-30 04:57:06.151259 | orchestrator | skipping: [testbed-node-4] => (item={'id': '57c440003422c8bc1939fea5ef6b25cadd521fb3e4d199b61b48713f18ff5d58', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-01-30 04:57:06.151275 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a6c2b2391b384125e62500de38990075f27dfd4a13e77b0a0ed32b78e596bc63', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-01-30 04:57:06.151287 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd6bfb1d3218f10117eab56ee2dca6ab1ac26e3a0b74c3c9acd5ada1aee41c575', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})
2026-01-30 04:57:06.151299 | orchestrator | skipping: [testbed-node-4] => (item={'id': '865726bcb7068b321cc071af46fb3ae92ff806a4e38326e0c150b055d564bb1b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151311 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ac27f4dc78f1c2d167b719d96ac7cc1cbb579f52e06e1039dfbf9b79f7082d50', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151323 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fd7865f1c4e4672a07162378053d51ba14bc4b4d0d04001596bc43e2f72b8a29', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151337 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c4d9e72c9153afd4541569595870af5379df53d43d710d049d0f959f430c2ff4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151349 | orchestrator | ok: [testbed-node-4] => (item={'id': '62d3ae145248a0d1d56f3d0d6491a3af5afad6040cab9fc611664e743b5f51bd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151360 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e78d6e877d3c98b0b4b22ac942905e10953a8908be8512949281fde47dff47c5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151371 | orchestrator | skipping: [testbed-node-4] => (item={'id': '32b9262fc9b78d4829f08df39774c600d271bf9fc62ac2b2f4c1b09396f3ea65', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-30 04:57:06.151383 | orchestrator | skipping: [testbed-node-4] => (item={'id': '313aff1762950f3e599bba606ea3093e9051be921c4398da46b2d1215ae08b9f', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-30 04:57:06.151412 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5a3814c9e4965cd3600c038b91f63610437463a93c7e3648a5047e5e77ebf853', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:06.151432 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c15d0579153cf20cc571d1dd9350d7504d0bcfb5ddf93ae2901e0b496e780341', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:06.151444 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fbb1e0c1d06b8cb1799be583c2d6b446235c35057e5a4969538e7c129ca1e45b', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:06.151455 | orchestrator | skipping: [testbed-node-5] => (item={'id': '621d86944e5e5c9208141afe9a8ddf82084f3fcfb6d2ecd90a462fde4a9df641', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-01-30 04:57:06.151466 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ff7bac16da423d8e7f676850bfdb21179f27e4f46ef97b133e71229c4472dc78', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 9 minutes'})
2026-01-30 04:57:06.151482 | orchestrator | skipping: [testbed-node-5] => (item={'id': '04dcac2a9fcf68346e66aae1e88e76b7576a30540866e457836d6342b3ed3858', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-01-30 04:57:06.151494 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c601e8eb3cb9d4d1d05aa8db42c7d9289bda2ff1931f563b78c16708cf2cf29', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-01-30 04:57:06.151505 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cf6042d29dce60dfe5326c3f1f008b7ffa5cd8afacbc01b8593b5ba37f08a750', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-01-30 04:57:06.151517 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b5f0eb8489066eb9dcee5eff7a9d4f2d13322e4681051138bfcd9a304da21f4b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-01-30 04:57:06.151528 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a7f674778ac16a6b04ec584b5e1ae82e38537b88e4123064ea48534db1e1f925', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-01-30 04:57:06.151539 | orchestrator | skipping: [testbed-node-5] => (item={'id': '45595e4f494ad21ea8de3ee015d2d21c036786b19748deaae05547dab8cea345', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})
2026-01-30 04:57:06.151550 | orchestrator | skipping: [testbed-node-5] => (item={'id': '77fe9bcfe8680cb8fe8de9f45381e7c4bb48c920f601ce1ddb2c93e21872773a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151562 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fe85b2e8e29b448919aa3e98331a7079f567e5d4d398ad1881b16f37fd652230', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151573 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a7f39be482ccf4c67fc091b4630a770c15b0d6097c20c231ab7ecc7ddfadd5f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151590 | orchestrator | ok: [testbed-node-5] => (item={'id': '0b297387ab623f1ef610326aec2faceae5f32bf7e4b5977bf10c91ed6a452b10', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:06.151609 | orchestrator | ok: [testbed-node-5] => (item={'id': '9262fb52650a6b8a7505ce37ecc5192f7bd9e3750ca4307be3b2fff8e2398b70', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:16.928051 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a69d56c440f258cee0872acd7ff824a3ba4fa399a5778c79ad9c27654ac528a8', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-01-30 04:57:16.928132 | orchestrator | skipping: [testbed-node-5] => (item={'id': '41e64725f7a16e5dc886f0f8f507034cb423eed8e366d155485446ee941b9c2f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-30 04:57:16.928140 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fad9da8bc89b3df4690957f8bac87a1ba5d08bb96e1e732f8cb47c88ca0adefe', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-01-30 04:57:16.928146 | orchestrator | skipping: [testbed-node-5] => (item={'id': '18e9e06e7272eafafebaa4a41c8ef9634dc6cfc1f76678b1703ae5db9a77f106', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:16.928162 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd92be54ba817a5b39f2a4e3848700a7ef99ea07e539719c166f97fd9a0e067a1', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:16.928167 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a1ed9f1aeb01a429639b7e7a994a285e408766cc5a9b3e3a5d14a9ce9806ddcf', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-01-30 04:57:16.928171 | orchestrator |
2026-01-30 04:57:16.928176 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-01-30 04:57:16.928182 | orchestrator | Friday 30 January 2026 04:57:06 +0000 (0:00:00.498) 0:00:04.910 ********
2026-01-30 04:57:16.928185 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928190 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:16.928194 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:16.928198 | orchestrator |
2026-01-30 04:57:16.928201 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-01-30 04:57:16.928205 | orchestrator | Friday 30 January 2026 04:57:06 +0000 (0:00:00.282) 0:00:05.192 ********
2026-01-30 04:57:16.928209 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928213 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:16.928217 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:16.928221 | orchestrator |
2026-01-30 04:57:16.928225 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-01-30 04:57:16.928228 | orchestrator | Friday 30 January 2026 04:57:06 +0000 (0:00:00.443) 0:00:05.636 ********
2026-01-30 04:57:16.928232 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928236 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:16.928240 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:16.928243 | orchestrator |
2026-01-30 04:57:16.928247 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-30 04:57:16.928251 | orchestrator | Friday 30 January 2026 04:57:07 +0000 (0:00:00.298) 0:00:05.934 ********
2026-01-30 04:57:16.928255 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928258 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:16.928262 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:16.928278 | orchestrator |
2026-01-30 04:57:16.928282 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-01-30 04:57:16.928286 | orchestrator | Friday 30 January 2026 04:57:07 +0000 (0:00:00.264) 0:00:06.199 ********
2026-01-30 04:57:16.928290 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-01-30 04:57:16.928295 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-01-30 04:57:16.928299 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928302 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-01-30 04:57:16.928306 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-01-30 04:57:16.928310 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:16.928314 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-01-30 04:57:16.928317 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-01-30 04:57:16.928321 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:16.928325 | orchestrator |
2026-01-30 04:57:16.928329 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-01-30 04:57:16.928332 | orchestrator | Friday 30 January 2026 04:57:07 +0000 (0:00:00.298) 0:00:06.497 ********
2026-01-30 04:57:16.928336 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928340 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:16.928344 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:16.928347 | orchestrator |
2026-01-30 04:57:16.928351 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-30 04:57:16.928355 | orchestrator | Friday 30 January 2026 04:57:08 +0000 (0:00:00.461) 0:00:06.959 ********
2026-01-30 04:57:16.928359 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928372 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:16.928376 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:16.928380 | orchestrator |
2026-01-30 04:57:16.928384 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-01-30 04:57:16.928388 | orchestrator | Friday 30 January 2026 04:57:08 +0000 (0:00:00.275) 0:00:07.234 ********
2026-01-30 04:57:16.928391 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928395 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:16.928399 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:16.928403 | orchestrator |
2026-01-30 04:57:16.928407 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-01-30 04:57:16.928410 | orchestrator | Friday 30 January 2026 04:57:08 +0000 (0:00:00.271) 0:00:07.506 ********
2026-01-30 04:57:16.928414 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928418 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:16.928422 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:16.928425 | orchestrator |
2026-01-30 04:57:16.928429 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-30 04:57:16.928433 | orchestrator | Friday 30 January 2026 04:57:09 +0000 (0:00:00.518) 0:00:08.025 ********
2026-01-30 04:57:16.928437 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928440 | orchestrator |
2026-01-30 04:57:16.928444 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-30 04:57:16.928448 | orchestrator | Friday 30 January 2026 04:57:09 +0000 (0:00:00.265) 0:00:08.290 ********
2026-01-30 04:57:16.928452 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928455 | orchestrator |
2026-01-30 04:57:16.928459 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-30 04:57:16.928463 | orchestrator | Friday 30 January 2026 04:57:09 +0000 (0:00:00.241) 0:00:08.532 ********
2026-01-30 04:57:16.928467 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928471 | orchestrator |
2026-01-30 04:57:16.928475 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:57:16.928482 | orchestrator | Friday 30 January 2026 04:57:09 +0000 (0:00:00.238) 0:00:08.770 ********
2026-01-30 04:57:16.928486 | orchestrator |
2026-01-30 04:57:16.928490 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:57:16.928494 | orchestrator | Friday 30 January 2026 04:57:10 +0000 (0:00:00.070) 0:00:08.840 ********
2026-01-30 04:57:16.928498 | orchestrator |
2026-01-30 04:57:16.928501 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:57:16.928505 | orchestrator | Friday 30 January 2026 04:57:10 +0000 (0:00:00.067) 0:00:08.908 ********
2026-01-30 04:57:16.928509 | orchestrator |
2026-01-30 04:57:16.928513 | orchestrator | TASK [Print report file information] *******************************************
2026-01-30 04:57:16.928517 | orchestrator | Friday 30 January 2026 04:57:10 +0000 (0:00:00.071) 0:00:08.980 ********
2026-01-30 04:57:16.928520 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928524 | orchestrator |
2026-01-30 04:57:16.928528 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-01-30 04:57:16.928532 | orchestrator | Friday 30 January 2026 04:57:10 +0000 (0:00:00.236) 0:00:09.216 ********
2026-01-30 04:57:16.928535 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928539 | orchestrator |
2026-01-30 04:57:16.928543 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-30 04:57:16.928547 | orchestrator | Friday 30 January 2026 04:57:10 +0000 (0:00:00.237) 0:00:09.453 ********
2026-01-30 04:57:16.928550 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928554 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:16.928558 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:16.928562 | orchestrator |
2026-01-30 04:57:16.928566 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-01-30 04:57:16.928569 | orchestrator | Friday 30 January 2026 04:57:10 +0000 (0:00:00.283) 0:00:09.736 ********
2026-01-30 04:57:16.928573 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928577 | orchestrator |
2026-01-30 04:57:16.928581 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-01-30 04:57:16.928584 | orchestrator | Friday 30 January 2026 04:57:11 +0000 (0:00:00.587) 0:00:10.324 ********
2026-01-30 04:57:16.928588 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 04:57:16.928592 | orchestrator |
2026-01-30 04:57:16.928596 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-01-30 04:57:16.928600 | orchestrator | Friday 30 January 2026 04:57:13 +0000 (0:00:01.732) 0:00:12.057 ********
2026-01-30 04:57:16.928603 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928607 | orchestrator |
2026-01-30 04:57:16.928611 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-01-30 04:57:16.928615 | orchestrator | Friday 30 January 2026 04:57:13 +0000 (0:00:00.134) 0:00:12.191 ********
2026-01-30 04:57:16.928619 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928624 | orchestrator |
2026-01-30 04:57:16.928628 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-01-30 04:57:16.928632 | orchestrator | Friday 30 January 2026 04:57:13 +0000 (0:00:00.303) 0:00:12.495 ********
2026-01-30 04:57:16.928636 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:16.928641 | orchestrator |
2026-01-30 04:57:16.928645 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-01-30 04:57:16.928688 | orchestrator | Friday 30 January 2026 04:57:13 +0000 (0:00:00.122) 0:00:12.618 ********
2026-01-30 04:57:16.928694 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928698 | orchestrator |
2026-01-30 04:57:16.928702 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-30 04:57:16.928707 | orchestrator | Friday 30 January 2026 04:57:13 +0000 (0:00:00.131) 0:00:12.749 ********
2026-01-30 04:57:16.928711 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:16.928715 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:16.928719 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:16.928728 | orchestrator |
2026-01-30 04:57:16.928733 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-01-30 04:57:16.928737 | orchestrator | Friday 30 January 2026 04:57:14 +0000 (0:00:00.307) 0:00:13.057 ********
2026-01-30 04:57:16.928742 | orchestrator | changed: [testbed-node-3]
2026-01-30 04:57:16.928746 | orchestrator | changed: [testbed-node-4]
2026-01-30 04:57:16.928751 | orchestrator | changed: [testbed-node-5]
2026-01-30 04:57:26.955561 | orchestrator |
2026-01-30 04:57:26.955722 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-01-30 04:57:26.955735 | orchestrator | Friday 30 January 2026 04:57:16 +0000 (0:00:02.627) 0:00:15.684 ********
2026-01-30 04:57:26.955742 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:26.955750 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:26.955756 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:26.955762 | orchestrator |
2026-01-30 04:57:26.955768 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-01-30 04:57:26.955775 | orchestrator | Friday 30 January 2026 04:57:17 +0000 (0:00:00.331) 0:00:16.015 ********
2026-01-30 04:57:26.955781 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:26.955787 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:26.955793 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:26.955799 | orchestrator |
2026-01-30 04:57:26.955805 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-01-30 04:57:26.955811 | orchestrator | Friday 30 January 2026 04:57:17 +0000 (0:00:00.514) 0:00:16.529 ********
2026-01-30 04:57:26.955817 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:26.955824 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:26.955830 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:26.955836 | orchestrator |
2026-01-30 04:57:26.955842 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-01-30 04:57:26.955848 | orchestrator | Friday 30 January 2026 04:57:18 +0000 (0:00:00.327) 0:00:16.857 ********
2026-01-30 04:57:26.955854 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:26.955860 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:26.955865 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:26.955871 | orchestrator |
2026-01-30 04:57:26.955877 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-01-30 04:57:26.955886 | orchestrator | Friday 30 January 2026 04:57:18 +0000 (0:00:00.493) 0:00:17.350 ********
2026-01-30 04:57:26.955893 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:26.955898 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:26.955904 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:26.955910 | orchestrator |
2026-01-30 04:57:26.955917 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-01-30 04:57:26.955923 | orchestrator | Friday 30 January 2026 04:57:18 +0000 (0:00:00.295) 0:00:17.646 ********
2026-01-30 04:57:26.955929 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:26.955935 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:26.955941 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:26.955947 | orchestrator |
2026-01-30 04:57:26.955953 | orchestrator | TASK [Prepare test data] *******************************************************
2026-01-30 04:57:26.955959 | orchestrator | Friday 30 January 2026 04:57:19 +0000 (0:00:00.283) 0:00:17.930 ********
2026-01-30 04:57:26.955965 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:26.955971 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:26.955977 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:26.955983 | orchestrator |
2026-01-30 04:57:26.955989 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-01-30 04:57:26.955995 | orchestrator | Friday 30 January 2026 04:57:19 +0000 (0:00:00.514) 0:00:18.444 ********
2026-01-30 04:57:26.956001 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:26.956007 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:26.956012 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:26.956018 | orchestrator |
2026-01-30 04:57:26.956024 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-01-30 04:57:26.956045 | orchestrator | Friday 30 January 2026 04:57:20 +0000 (0:00:00.706) 0:00:19.150 ********
2026-01-30 04:57:26.956052 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:26.956057 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:26.956063 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:26.956069 | orchestrator |
2026-01-30 04:57:26.956075 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-01-30 04:57:26.956081 | orchestrator | Friday 30 January 2026 04:57:20 +0000 (0:00:00.322) 0:00:19.473 ********
2026-01-30 04:57:26.956087 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:26.956093 | orchestrator | skipping: [testbed-node-4]
2026-01-30 04:57:26.956099 | orchestrator | skipping: [testbed-node-5]
2026-01-30 04:57:26.956104 | orchestrator |
2026-01-30 04:57:26.956110 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-01-30 04:57:26.956116 | orchestrator | Friday 30 January 2026 04:57:20 +0000 (0:00:00.292) 0:00:19.766 ********
2026-01-30 04:57:26.956122 | orchestrator | ok: [testbed-node-3]
2026-01-30 04:57:26.956128 | orchestrator | ok: [testbed-node-4]
2026-01-30 04:57:26.956134 | orchestrator | ok: [testbed-node-5]
2026-01-30 04:57:26.956140 | orchestrator |
2026-01-30 04:57:26.956146 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-01-30 04:57:26.956152 | orchestrator | Friday 30 January 2026 04:57:21 +0000 (0:00:00.490) 0:00:20.257 ********
2026-01-30 04:57:26.956158 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-30 04:57:26.956164 | orchestrator |
2026-01-30 04:57:26.956170 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-01-30 04:57:26.956175 | orchestrator | Friday 30 January 2026 04:57:21 +0000 (0:00:00.293) 0:00:20.550 ********
2026-01-30 04:57:26.956181 | orchestrator | skipping: [testbed-node-3]
2026-01-30 04:57:26.956187 | orchestrator |
2026-01-30 04:57:26.956193 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-01-30 04:57:26.956199 | orchestrator | Friday 30 January 2026 04:57:22 +0000 (0:00:00.252) 0:00:20.803 ********
2026-01-30 04:57:26.956205 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-30 04:57:26.956211 | orchestrator |
2026-01-30 04:57:26.956217 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-01-30 04:57:26.956223 | orchestrator | Friday 30 January 2026 04:57:23 +0000 (0:00:01.580) 0:00:22.383 ********
2026-01-30 04:57:26.956229 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-30 04:57:26.956234 | orchestrator |
2026-01-30 04:57:26.956241 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-01-30 04:57:26.956247 | orchestrator | Friday 30 January 2026 04:57:23 +0000 (0:00:00.261) 0:00:22.644 ********
2026-01-30 04:57:26.956253 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-30 04:57:26.956258 | orchestrator |
2026-01-30 04:57:26.956278 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:57:26.956284 | orchestrator | Friday 30 January 2026 04:57:24 +0000 (0:00:00.266) 0:00:22.911 ********
2026-01-30 04:57:26.956290 | orchestrator |
2026-01-30 04:57:26.956296 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:57:26.956302 | orchestrator | Friday 30 January 2026 04:57:24 +0000 (0:00:00.067) 0:00:22.978 ********
2026-01-30 04:57:26.956307 | orchestrator |
2026-01-30 04:57:26.956313 | orchestrator | TASK [Flush handlers] **********************************************************
2026-01-30 04:57:26.956319 | orchestrator | Friday 30 January 2026 04:57:24 +0000 (0:00:00.066) 0:00:23.045 ********
2026-01-30 04:57:26.956325 | orchestrator |
2026-01-30 04:57:26.956331 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-01-30 04:57:26.956337 | orchestrator | Friday 30 January 2026 04:57:24 +0000 (0:00:00.069) 0:00:23.114 ********
2026-01-30 04:57:26.956343 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-30 04:57:26.956348 | orchestrator |
2026-01-30 04:57:26.956354 | orchestrator | TASK [Print report file information] *******************************************
2026-01-30 04:57:26.956364 | orchestrator | Friday 30 January 2026 04:57:25 +0000 (0:00:01.535) 0:00:24.650 ********
2026-01-30 04:57:26.956370 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-01-30 04:57:26.956376 | orchestrator |  "msg": [
2026-01-30 04:57:26.956383 | orchestrator |  "Validator run completed.",
2026-01-30 04:57:26.956392 | orchestrator |  "You can find the report file here:",
2026-01-30 04:57:26.956402 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-01-30T04:57:02+00:00-report.json",
2026-01-30 04:57:26.956414 | orchestrator |  "on the following host:",
2026-01-30 04:57:26.956420 | orchestrator |  "testbed-manager"
2026-01-30 04:57:26.956426 | orchestrator |  ]
2026-01-30 04:57:26.956433 | orchestrator | }
2026-01-30 04:57:26.956439 | orchestrator |
2026-01-30 04:57:26.956445 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 04:57:26.956452 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-30 04:57:26.956460 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-30 04:57:26.956466 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-30 04:57:26.956472 | orchestrator |
2026-01-30 04:57:26.956478 | orchestrator |
2026-01-30 04:57:26.956484 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 04:57:26.956490 | orchestrator | Friday 30 January 2026 04:57:26 +0000 (0:00:00.805) 0:00:25.455 ********
2026-01-30 04:57:26.956496 | orchestrator | ===============================================================================
2026-01-30 04:57:26.956502 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.63s
2026-01-30
04:57:26.956508 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.73s 2026-01-30 04:57:26.956513 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2026-01-30 04:57:26.956519 | orchestrator | Write report file ------------------------------------------------------- 1.54s 2026-01-30 04:57:26.956525 | orchestrator | Print report file information ------------------------------------------- 0.81s 2026-01-30 04:57:26.956531 | orchestrator | Get timestamp for report file ------------------------------------------- 0.78s 2026-01-30 04:57:26.956537 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.72s 2026-01-30 04:57:26.956543 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-01-30 04:57:26.956548 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.71s 2026-01-30 04:57:26.956554 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.59s 2026-01-30 04:57:26.956560 | orchestrator | Set test result to passed if all containers are running ----------------- 0.52s 2026-01-30 04:57:26.956566 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2026-01-30 04:57:26.956572 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2026-01-30 04:57:26.956578 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s 2026-01-30 04:57:26.956583 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.49s 2026-01-30 04:57:26.956589 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.49s 2026-01-30 04:57:26.956595 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.47s 2026-01-30 04:57:26.956601 
| orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.46s 2026-01-30 04:57:26.956607 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.44s 2026-01-30 04:57:26.956613 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.33s 2026-01-30 04:57:27.218790 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-01-30 04:57:27.225904 | orchestrator | + set -e 2026-01-30 04:57:27.226002 | orchestrator | + source /opt/manager-vars.sh 2026-01-30 04:57:27.226076 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-30 04:57:27.226092 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-30 04:57:27.226105 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-30 04:57:27.226117 | orchestrator | ++ CEPH_VERSION=reef 2026-01-30 04:57:27.226129 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-30 04:57:27.226141 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-30 04:57:27.226153 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-30 04:57:27.226165 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-30 04:57:27.226176 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-30 04:57:27.226188 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-30 04:57:27.226199 | orchestrator | ++ export ARA=false 2026-01-30 04:57:27.226211 | orchestrator | ++ ARA=false 2026-01-30 04:57:27.226223 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-30 04:57:27.226234 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-30 04:57:27.226246 | orchestrator | ++ export TEMPEST=false 2026-01-30 04:57:27.226257 | orchestrator | ++ TEMPEST=false 2026-01-30 04:57:27.226269 | orchestrator | ++ export IS_ZUUL=true 2026-01-30 04:57:27.226281 | orchestrator | ++ IS_ZUUL=true 2026-01-30 04:57:27.226293 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 04:57:27.226305 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 04:57:27.226318 | orchestrator | ++ export EXTERNAL_API=false 2026-01-30 04:57:27.226329 | orchestrator | ++ EXTERNAL_API=false 2026-01-30 04:57:27.226341 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-30 04:57:27.226354 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-30 04:57:27.226366 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-30 04:57:27.226377 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-30 04:57:27.226389 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-30 04:57:27.226401 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-30 04:57:27.226413 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-30 04:57:27.226425 | orchestrator | + source /etc/os-release 2026-01-30 04:57:27.226437 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-01-30 04:57:27.226449 | orchestrator | ++ NAME=Ubuntu 2026-01-30 04:57:27.226460 | orchestrator | ++ VERSION_ID=24.04 2026-01-30 04:57:27.226472 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-01-30 04:57:27.226484 | orchestrator | ++ VERSION_CODENAME=noble 2026-01-30 04:57:27.226496 | orchestrator | ++ ID=ubuntu 2026-01-30 04:57:27.226508 | orchestrator | ++ ID_LIKE=debian 2026-01-30 04:57:27.226520 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-01-30 04:57:27.226532 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-01-30 04:57:27.226544 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-01-30 04:57:27.226556 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-01-30 04:57:27.226568 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-01-30 04:57:27.226580 | orchestrator | ++ LOGO=ubuntu-logo 2026-01-30 04:57:27.226592 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-01-30 04:57:27.226606 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-01-30 
04:57:27.226618 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-30 04:57:27.239514 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-01-30 04:57:46.353309 | orchestrator | 2026-01-30 04:57:46.353452 | orchestrator | # Status of Elasticsearch 2026-01-30 04:57:46.353481 | orchestrator | 2026-01-30 04:57:46.353501 | orchestrator | + pushd /opt/configuration/contrib 2026-01-30 04:57:46.353550 | orchestrator | + echo 2026-01-30 04:57:46.353570 | orchestrator | + echo '# Status of Elasticsearch' 2026-01-30 04:57:46.353700 | orchestrator | + echo 2026-01-30 04:57:46.353741 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-01-30 04:57:46.529744 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-01-30 04:57:46.530137 | orchestrator | 2026-01-30 04:57:46.530158 | orchestrator | # Status of MariaDB 2026-01-30 04:57:46.530165 | orchestrator | 2026-01-30 04:57:46.530170 | orchestrator | + echo 2026-01-30 04:57:46.530176 | orchestrator | + echo '# Status of MariaDB' 2026-01-30 04:57:46.530200 | orchestrator | + echo 2026-01-30 04:57:46.531035 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-30 04:57:46.598786 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-30 04:57:46.598900 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-30 04:57:46.598926 | orchestrator | + MARIADB_USER=root_shard_0 2026-01-30 04:57:46.598947 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-01-30 04:57:46.674996 
| orchestrator | Reading package lists... 2026-01-30 04:57:47.029580 | orchestrator | Building dependency tree... 2026-01-30 04:57:47.030237 | orchestrator | Reading state information... 2026-01-30 04:57:47.415977 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-01-30 04:57:47.416061 | orchestrator | bc set to manually installed. 2026-01-30 04:57:47.416072 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded. 2026-01-30 04:57:48.052744 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-01-30 04:57:48.054867 | orchestrator | 2026-01-30 04:57:48.054919 | orchestrator | # Status of Prometheus 2026-01-30 04:57:48.054956 | orchestrator | 2026-01-30 04:57:48.054991 | orchestrator | + echo 2026-01-30 04:57:48.055026 | orchestrator | + echo '# Status of Prometheus' 2026-01-30 04:57:48.055062 | orchestrator | + echo 2026-01-30 04:57:48.055097 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-01-30 04:57:48.117269 | orchestrator | Unauthorized 2026-01-30 04:57:48.120783 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-01-30 04:57:48.184014 | orchestrator | Unauthorized 2026-01-30 04:57:48.187020 | orchestrator | 2026-01-30 04:57:48.187080 | orchestrator | # Status of RabbitMQ 2026-01-30 04:57:48.187094 | orchestrator | 2026-01-30 04:57:48.187105 | orchestrator | + echo 2026-01-30 04:57:48.187116 | orchestrator | + echo '# Status of RabbitMQ' 2026-01-30 04:57:48.187128 | orchestrator | + echo 2026-01-30 04:57:48.188201 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-01-30 04:57:48.259060 | orchestrator | + [[ -1 -ge 0 ]] 2026-01-30 04:57:48.259174 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-01-30 04:57:48.259201 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-01-30 04:57:48.727096 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-01-30 04:57:48.735675 | orchestrator | 2026-01-30 04:57:48.735749 | orchestrator | # Status of Redis 2026-01-30 04:57:48.735760 | orchestrator | 2026-01-30 04:57:48.735768 | orchestrator | + echo 2026-01-30 04:57:48.735776 | orchestrator | + echo '# Status of Redis' 2026-01-30 04:57:48.735784 | orchestrator | + echo 2026-01-30 04:57:48.735793 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-01-30 04:57:48.744380 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002999s;;;0.000000;10.000000 2026-01-30 04:57:48.744800 | orchestrator | + popd 2026-01-30 04:57:48.744977 | orchestrator | 2026-01-30 04:57:48.744994 | orchestrator | # Create backup of MariaDB database 2026-01-30 04:57:48.745002 | orchestrator | 2026-01-30 04:57:48.745010 | orchestrator | + echo 2026-01-30 04:57:48.745017 | orchestrator | + echo '# Create backup of MariaDB database' 2026-01-30 04:57:48.745024 | orchestrator | + echo 2026-01-30 04:57:48.745031 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-01-30 04:57:50.742845 | orchestrator | 2026-01-30 04:57:50 | INFO  | Task d7674ba4-2bca-43c5-a891-9c6b2c68add8 (mariadb_backup) was prepared for execution. 2026-01-30 04:57:50.742970 | orchestrator | 2026-01-30 04:57:50 | INFO  | It takes a moment until task d7674ba4-2bca-43c5-a891-9c6b2c68add8 (mariadb_backup) has been started and output is visible here. 
2026-01-30 04:58:18.885250 | orchestrator | 2026-01-30 04:58:18.885384 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 04:58:18.885403 | orchestrator | 2026-01-30 04:58:18.885416 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 04:58:18.885427 | orchestrator | Friday 30 January 2026 04:57:54 +0000 (0:00:00.129) 0:00:00.129 ******** 2026-01-30 04:58:18.885438 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:58:18.885450 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:58:18.885461 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:58:18.885472 | orchestrator | 2026-01-30 04:58:18.885483 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 04:58:18.885594 | orchestrator | Friday 30 January 2026 04:57:54 +0000 (0:00:00.257) 0:00:00.387 ******** 2026-01-30 04:58:18.885617 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-30 04:58:18.885637 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-30 04:58:18.885656 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-30 04:58:18.885674 | orchestrator | 2026-01-30 04:58:18.885685 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-30 04:58:18.885696 | orchestrator | 2026-01-30 04:58:18.885706 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-30 04:58:18.885725 | orchestrator | Friday 30 January 2026 04:57:55 +0000 (0:00:00.538) 0:00:00.925 ******** 2026-01-30 04:58:18.885752 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 04:58:18.885772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-30 04:58:18.885790 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-30 04:58:18.885808 | orchestrator | 
2026-01-30 04:58:18.885826 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-30 04:58:18.885844 | orchestrator | Friday 30 January 2026 04:57:55 +0000 (0:00:00.363) 0:00:01.289 ******** 2026-01-30 04:58:18.885863 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 04:58:18.885883 | orchestrator | 2026-01-30 04:58:18.885903 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-01-30 04:58:18.885941 | orchestrator | Friday 30 January 2026 04:57:56 +0000 (0:00:00.469) 0:00:01.758 ******** 2026-01-30 04:58:18.885963 | orchestrator | ok: [testbed-node-0] 2026-01-30 04:58:18.885980 | orchestrator | ok: [testbed-node-1] 2026-01-30 04:58:18.885997 | orchestrator | ok: [testbed-node-2] 2026-01-30 04:58:18.886091 | orchestrator | 2026-01-30 04:58:18.886116 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-01-30 04:58:18.886129 | orchestrator | Friday 30 January 2026 04:57:59 +0000 (0:00:03.001) 0:00:04.759 ******** 2026-01-30 04:58:18.886141 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-30 04:58:18.886151 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-30 04:58:18.886163 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-30 04:58:18.886174 | orchestrator | mariadb_bootstrap_restart 2026-01-30 04:58:18.886185 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:58:18.886196 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:58:18.886242 | orchestrator | changed: [testbed-node-0] 2026-01-30 04:58:18.886253 | orchestrator | 2026-01-30 04:58:18.886264 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-30 04:58:18.886275 | orchestrator | 
skipping: no hosts matched 2026-01-30 04:58:18.886285 | orchestrator | 2026-01-30 04:58:18.886296 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-30 04:58:18.886307 | orchestrator | skipping: no hosts matched 2026-01-30 04:58:18.886317 | orchestrator | 2026-01-30 04:58:18.886328 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-30 04:58:18.886339 | orchestrator | skipping: no hosts matched 2026-01-30 04:58:18.886349 | orchestrator | 2026-01-30 04:58:18.886362 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-30 04:58:18.886379 | orchestrator | 2026-01-30 04:58:18.886397 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-30 04:58:18.886415 | orchestrator | Friday 30 January 2026 04:58:17 +0000 (0:00:18.559) 0:00:23.319 ******** 2026-01-30 04:58:18.886432 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:58:18.886447 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:58:18.886458 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:58:18.886468 | orchestrator | 2026-01-30 04:58:18.886479 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-30 04:58:18.886535 | orchestrator | Friday 30 January 2026 04:58:18 +0000 (0:00:00.332) 0:00:23.652 ******** 2026-01-30 04:58:18.886548 | orchestrator | skipping: [testbed-node-0] 2026-01-30 04:58:18.886559 | orchestrator | skipping: [testbed-node-1] 2026-01-30 04:58:18.886570 | orchestrator | skipping: [testbed-node-2] 2026-01-30 04:58:18.886580 | orchestrator | 2026-01-30 04:58:18.886591 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 04:58:18.886604 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-30 
04:58:18.886616 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 04:58:18.886627 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 04:58:18.886637 | orchestrator | 2026-01-30 04:58:18.886648 | orchestrator | 2026-01-30 04:58:18.886659 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 04:58:18.886669 | orchestrator | Friday 30 January 2026 04:58:18 +0000 (0:00:00.366) 0:00:24.019 ******** 2026-01-30 04:58:18.886680 | orchestrator | =============================================================================== 2026-01-30 04:58:18.886691 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.56s 2026-01-30 04:58:18.886724 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.00s 2026-01-30 04:58:18.886736 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-01-30 04:58:18.886747 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.47s 2026-01-30 04:58:18.886758 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.37s 2026-01-30 04:58:18.886768 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.36s 2026-01-30 04:58:18.886779 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2026-01-30 04:58:18.886790 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-01-30 04:58:19.163139 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-01-30 04:58:19.169778 | orchestrator | + set -e 2026-01-30 04:58:19.169880 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-30 04:58:19.169906 | orchestrator | ++ export 
INTERACTIVE=false 2026-01-30 04:58:19.169924 | orchestrator | ++ INTERACTIVE=false 2026-01-30 04:58:19.169936 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-30 04:58:19.169946 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-30 04:58:19.169958 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-01-30 04:58:19.170702 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-01-30 04:58:19.176624 | orchestrator | 2026-01-30 04:58:19.176694 | orchestrator | # OpenStack endpoints 2026-01-30 04:58:19.176714 | orchestrator | 2026-01-30 04:58:19.176731 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-30 04:58:19.176747 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-30 04:58:19.176763 | orchestrator | + export OS_CLOUD=admin 2026-01-30 04:58:19.176780 | orchestrator | + OS_CLOUD=admin 2026-01-30 04:58:19.176798 | orchestrator | + echo 2026-01-30 04:58:19.176815 | orchestrator | + echo '# OpenStack endpoints' 2026-01-30 04:58:19.176831 | orchestrator | + echo 2026-01-30 04:58:19.176847 | orchestrator | + openstack endpoint list 2026-01-30 04:58:22.241131 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-30 04:58:22.241250 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-01-30 04:58:22.241266 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-30 04:58:22.241278 | orchestrator | | 0e79c80826f349cf9510758e24bbdde5 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-01-30 04:58:22.241331 | orchestrator | | 14a054c5d13e4d7081313fa0d475db90 | RegionOne | magnum | container-infra | True | 
public | https://api.testbed.osism.xyz:9511/v1 | 2026-01-30 04:58:22.241343 | orchestrator | | 14f4478680a74c6695fca04174c703d1 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-30 04:58:22.241354 | orchestrator | | 23ff1a3578b740ff8483ac801c89402d | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-01-30 04:58:22.241380 | orchestrator | | 29b8046553294d65bc63db3f854e63c5 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-01-30 04:58:22.241392 | orchestrator | | 32507624368a433b81e251cdc43eebfb | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-01-30 04:58:22.241403 | orchestrator | | 3349170351fc4ec2a721fc1622a82d0a | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-01-30 04:58:22.241413 | orchestrator | | 3965940ed3024204838221133823893d | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-01-30 04:58:22.241424 | orchestrator | | 3d80c5ff7457473692b650ce2d93063d | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-01-30 04:58:22.241435 | orchestrator | | 53354be9bf9a4e70b468c1d84d056e40 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-01-30 04:58:22.241446 | orchestrator | | 5a261ef1321c4ccb976eb908cfb6a799 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-01-30 04:58:22.241457 | orchestrator | | 682d16c759f647c38b1d58b902b448e7 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-01-30 04:58:22.241468 | orchestrator | | 6a3a8a27dce045ff8b244c2ef5ce8194 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-01-30 04:58:22.241479 | 
orchestrator | | 6d5bdef2b1a1481496fe28c5ccf958bd | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-01-30 04:58:22.241557 | orchestrator | | 7095b7bb01314f44a967edb79e2d6f65 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-01-30 04:58:22.241568 | orchestrator | | 7ed51cdc2b104628bd99fcd05fb8120f | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-01-30 04:58:22.241579 | orchestrator | | 7eed71831e3842799a5b3e6b388e623c | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-01-30 04:58:22.241590 | orchestrator | | 839f3eac367d49b3b721be7d2152a18d | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-01-30 04:58:22.241600 | orchestrator | | 84ee070c8f5f43e6bdc15f7a32132033 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-01-30 04:58:22.241611 | orchestrator | | 95e306fd9b374fbf8637711a7a71c422 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-01-30 04:58:22.241642 | orchestrator | | a8b7314460b24f448dffc3326047e47d | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-01-30 04:58:22.241664 | orchestrator | | b4c834a47dda48949b36429b6d128b65 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-01-30 04:58:22.241683 | orchestrator | | b72adedf735448e8ade4e8381ceccfb3 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-30 04:58:22.241696 | orchestrator | | ba4e74c731a448eaad49dcdc1b09b223 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-01-30 04:58:22.241709 | orchestrator | | be9cedcf8b7f4ae38220abc51e5e7471 | RegionOne | manila | share | True | 
internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-01-30 04:58:22.241721 | orchestrator | | c631a7ca7855493dad347ee850e5b9b6 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-01-30 04:58:22.241734 | orchestrator | | d8773238b49f477589ac70a6a9b96bec | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-01-30 04:58:22.241746 | orchestrator | | ea247026a55a418fbffa17233efc68ad | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-01-30 04:58:22.241758 | orchestrator | | ef5c163556b941228d09b28215ae6d71 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-01-30 04:58:22.241771 | orchestrator | | f16ca86d66f449968376daa139161182 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-01-30 04:58:22.241783 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-01-30 04:58:22.444428 | orchestrator | 2026-01-30 04:58:22.444616 | orchestrator | # Cinder 2026-01-30 04:58:22.444669 | orchestrator | 2026-01-30 04:58:22.444689 | orchestrator | + echo 2026-01-30 04:58:22.444709 | orchestrator | + echo '# Cinder' 2026-01-30 04:58:22.444729 | orchestrator | + echo 2026-01-30 04:58:22.444747 | orchestrator | + openstack volume service list 2026-01-30 04:58:24.955272 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-30 04:58:24.955389 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-01-30 04:58:24.955405 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-01-30 04:58:24.955417 | 
orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-30T04:58:17.000000 |
2026-01-30 04:58:24.955429 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-30T04:58:16.000000 |
2026-01-30 04:58:24.955440 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-30T04:58:17.000000 |
2026-01-30 04:58:24.955451 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-01-30T04:58:17.000000 |
2026-01-30 04:58:24.955462 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-01-30T04:58:22.000000 |
2026-01-30 04:58:24.955504 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-01-30T04:58:16.000000 |
2026-01-30 04:58:24.955517 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-01-30T04:58:23.000000 |
2026-01-30 04:58:24.955528 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-01-30T04:58:15.000000 |
2026-01-30 04:58:24.955540 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-01-30T04:58:16.000000 |
2026-01-30 04:58:24.955588 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-01-30 04:58:25.169956 | orchestrator |
2026-01-30 04:58:25.170072 | orchestrator | # Neutron
2026-01-30 04:58:25.170081 | orchestrator |
2026-01-30 04:58:25.170086 | orchestrator | + echo
2026-01-30 04:58:25.170090 | orchestrator | + echo '# Neutron'
2026-01-30 04:58:25.170096 | orchestrator | + echo
2026-01-30 04:58:25.170100 | orchestrator | + openstack network agent list
2026-01-30 04:58:27.666718 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-01-30 04:58:27.666811 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-01-30 04:58:27.666825 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-01-30 04:58:27.666836 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-01-30 04:58:27.666846 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-01-30 04:58:27.666855 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-01-30 04:58:27.666865 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-01-30 04:58:27.666892 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-01-30 04:58:27.666903 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-01-30 04:58:27.666912 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-01-30 04:58:27.666922 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-01-30 04:58:27.666931 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-01-30 04:58:27.666941 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-01-30 04:58:27.873423 | orchestrator | + openstack network service provider list
2026-01-30 04:58:30.363612 | orchestrator | +---------------+------+---------+
2026-01-30 04:58:30.363724 | orchestrator | | Service Type | Name | Default |
2026-01-30 04:58:30.363738 | orchestrator | +---------------+------+---------+
2026-01-30 04:58:30.363748 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-01-30 04:58:30.363759 | orchestrator | +---------------+------+---------+
2026-01-30 04:58:30.603177 | orchestrator |
2026-01-30 04:58:30.603301 | orchestrator | # Nova
2026-01-30 04:58:30.603327 | orchestrator |
2026-01-30 04:58:30.603345 | orchestrator | + echo
2026-01-30 04:58:30.603364 | orchestrator | + echo '# Nova'
2026-01-30 04:58:30.603384 | orchestrator | + echo
2026-01-30 04:58:30.603403 | orchestrator | + openstack compute service list
2026-01-30 04:58:33.399364 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-30 04:58:33.399508 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-01-30 04:58:33.399523 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-30 04:58:33.399537 | orchestrator | | 10e29577-5860-4b98-9629-66fed289bcd3 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-01-30T04:58:29.000000 |
2026-01-30 04:58:33.399594 | orchestrator | | 6c9efc86-92df-4251-82cc-94dd7b1d2d09 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-01-30T04:58:23.000000 |
2026-01-30 04:58:33.399616 | orchestrator | | 40665b61-501b-40e0-b4ea-ca55252ef56c | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-01-30T04:58:26.000000 |
2026-01-30 04:58:33.399632 | orchestrator | | 99f7009b-0a0f-4f11-b911-1d3085896e5e | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-01-30T04:58:28.000000 |
2026-01-30 04:58:33.399647 | orchestrator | | 1dfb7a54-adfb-4515-ad14-056ed9027cb3 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-01-30T04:58:29.000000 |
2026-01-30 04:58:33.399662 | orchestrator | | 2fcea660-d2cb-46f3-8b22-142c4847b7e4 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-01-30T04:58:29.000000 |
2026-01-30 04:58:33.399677 | orchestrator | | 82df4c68-7a5b-4c7c-8d95-a5c77e40ea8a | nova-compute | testbed-node-5 | nova | enabled | up | 2026-01-30T04:58:26.000000 |
2026-01-30 04:58:33.399692 | orchestrator | | 437cbefa-35d3-44f2-97a7-750487cdfc15 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-01-30T04:58:26.000000 |
2026-01-30 04:58:33.399708 | orchestrator | | b5caa624-bed8-4d6b-b203-c115cd818ec1 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-01-30T04:58:28.000000 |
2026-01-30 04:58:33.399723 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-01-30 04:58:33.685775 | orchestrator | + openstack hypervisor list
2026-01-30 04:58:36.334775 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-30 04:58:36.334894 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-01-30 04:58:36.334903 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-30 04:58:36.334910 | orchestrator | | 3e9595f8-db10-4682-b692-72ab1ee7db28 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-01-30 04:58:36.334917 | orchestrator | | c27d4b85-3390-446b-b1b5-3a3efdc5aa38 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-01-30 04:58:36.334924 | orchestrator | | bc077101-1c61-4d18-8d31-5b248c1a55ce | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-01-30 04:58:36.334930 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-01-30 04:58:36.603153 | orchestrator |
2026-01-30 04:58:36.603263 | orchestrator | + echo
2026-01-30 04:58:36.605127 | orchestrator | # Run OpenStack test play
2026-01-30 04:58:36.605237 | orchestrator |
2026-01-30 04:58:36.605254 | orchestrator | + echo '# Run OpenStack test play'
2026-01-30 04:58:36.605268 | orchestrator | + echo
2026-01-30 04:58:36.605279 | orchestrator | + osism apply --environment openstack test
2026-01-30 04:58:38.569395 | orchestrator | 2026-01-30 04:58:38 | INFO  | Trying to run play test in environment openstack
2026-01-30 04:58:48.703623 | orchestrator | 2026-01-30 04:58:48 | INFO  | Task ea2ab323-f3e4-4b6e-9f01-1ff31c9622f6 (test) was prepared for execution.
2026-01-30 04:58:48.703764 | orchestrator | 2026-01-30 04:58:48 | INFO  | It takes a moment until task ea2ab323-f3e4-4b6e-9f01-1ff31c9622f6 (test) has been started and output is visible here.
2026-01-30 05:01:30.328837 | orchestrator |
2026-01-30 05:01:30.329004 | orchestrator | PLAY [Create test project] *****************************************************
2026-01-30 05:01:30.329119 | orchestrator |
2026-01-30 05:01:30.329139 | orchestrator | TASK [Create test domain] ******************************************************
2026-01-30 05:01:30.329153 | orchestrator | Friday 30 January 2026 04:58:52 +0000 (0:00:00.082) 0:00:00.082 ********
2026-01-30 05:01:30.329164 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329175 | orchestrator |
2026-01-30 05:01:30.329185 | orchestrator | TASK [Create test-admin user] **************************************************
2026-01-30 05:01:30.329194 | orchestrator | Friday 30 January 2026 04:58:56 +0000 (0:00:03.350) 0:00:03.433 ********
2026-01-30 05:01:30.329204 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329214 | orchestrator |
2026-01-30 05:01:30.329224 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-01-30 05:01:30.329281 | orchestrator | Friday 30 January 2026 04:59:00 +0000 (0:00:03.832) 0:00:07.266 ********
2026-01-30 05:01:30.329293 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329303 | orchestrator |
2026-01-30 05:01:30.329312 | orchestrator | TASK [Create test project] *****************************************************
2026-01-30 05:01:30.329322 | orchestrator | Friday 30 January 2026 04:59:06 +0000 (0:00:06.025) 0:00:13.292 ********
2026-01-30 05:01:30.329333 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329345 | orchestrator |
2026-01-30 05:01:30.329357 | orchestrator | TASK [Create test user] ********************************************************
2026-01-30 05:01:30.329369 | orchestrator | Friday 30 January 2026 04:59:10 +0000 (0:00:03.975) 0:00:17.268 ********
2026-01-30 05:01:30.329380 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329392 | orchestrator |
2026-01-30 05:01:30.329403 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-01-30 05:01:30.329415 | orchestrator | Friday 30 January 2026 04:59:14 +0000 (0:00:04.006) 0:00:21.274 ********
2026-01-30 05:01:30.329427 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-01-30 05:01:30.329440 | orchestrator | changed: [localhost] => (item=member)
2026-01-30 05:01:30.329453 | orchestrator | changed: [localhost] => (item=creator)
2026-01-30 05:01:30.329464 | orchestrator |
2026-01-30 05:01:30.329475 | orchestrator | TASK [Create test server group] ************************************************
2026-01-30 05:01:30.329487 | orchestrator | Friday 30 January 2026 04:59:25 +0000 (0:00:11.197) 0:00:32.471 ********
2026-01-30 05:01:30.329496 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329506 | orchestrator |
2026-01-30 05:01:30.329515 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-01-30 05:01:30.329525 | orchestrator | Friday 30 January 2026 04:59:29 +0000 (0:00:03.998) 0:00:36.469 ********
2026-01-30 05:01:30.329535 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329544 | orchestrator |
2026-01-30 05:01:30.329553 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-01-30 05:01:30.329564 | orchestrator | Friday 30 January 2026 04:59:34 +0000 (0:00:04.900) 0:00:41.370 ********
2026-01-30 05:01:30.329573 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329583 | orchestrator |
2026-01-30 05:01:30.329592 | orchestrator | TASK [Create icmp security group] **********************************************
2026-01-30 05:01:30.329602 | orchestrator | Friday 30 January 2026 04:59:38 +0000 (0:00:04.122) 0:00:45.492 ********
2026-01-30 05:01:30.329611 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329621 | orchestrator |
2026-01-30 05:01:30.329630 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-01-30 05:01:30.329640 | orchestrator | Friday 30 January 2026 04:59:42 +0000 (0:00:03.884) 0:00:49.377 ********
2026-01-30 05:01:30.329649 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329659 | orchestrator |
2026-01-30 05:01:30.329668 | orchestrator | TASK [Create test keypair] *****************************************************
2026-01-30 05:01:30.329678 | orchestrator | Friday 30 January 2026 04:59:46 +0000 (0:00:03.892) 0:00:53.269 ********
2026-01-30 05:01:30.329687 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329697 | orchestrator |
2026-01-30 05:01:30.329706 | orchestrator | TASK [Create test network] *****************************************************
2026-01-30 05:01:30.329716 | orchestrator | Friday 30 January 2026 04:59:50 +0000 (0:00:04.293) 0:00:57.562 ********
2026-01-30 05:01:30.329725 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329735 | orchestrator |
2026-01-30 05:01:30.329746 | orchestrator | TASK [Create test subnet] ******************************************************
2026-01-30 05:01:30.329755 | orchestrator | Friday 30 January 2026 04:59:55 +0000 (0:00:04.745) 0:01:02.308 ********
2026-01-30 05:01:30.329765 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329775 | orchestrator |
2026-01-30 05:01:30.329784 | orchestrator | TASK [Create test router] ******************************************************
2026-01-30 05:01:30.329793 | orchestrator | Friday 30 January 2026 05:00:00 +0000 (0:00:05.813) 0:01:08.122 ********
2026-01-30 05:01:30.329813 | orchestrator | changed: [localhost]
2026-01-30 05:01:30.329822 | orchestrator |
2026-01-30 05:01:30.329839 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-01-30 05:01:30.329856 | orchestrator |
2026-01-30 05:01:30.329872 | orchestrator | TASK [Get test server group] ***************************************************
2026-01-30 05:01:30.329888 | orchestrator | Friday 30 January 2026 05:00:12 +0000 (0:00:11.363) 0:01:19.485 ********
2026-01-30 05:01:30.329906 | orchestrator | ok: [localhost]
2026-01-30 05:01:30.329923 | orchestrator |
2026-01-30 05:01:30.329941 | orchestrator | TASK [Detach test volume] ******************************************************
2026-01-30 05:01:30.329957 | orchestrator | Friday 30 January 2026 05:00:15 +0000 (0:00:03.369) 0:01:22.855 ********
2026-01-30 05:01:30.329973 | orchestrator | skipping: [localhost]
2026-01-30 05:01:30.329989 | orchestrator |
2026-01-30 05:01:30.330005 | orchestrator | TASK [Delete test volume] ******************************************************
2026-01-30 05:01:30.330128 | orchestrator | Friday 30 January 2026 05:00:15 +0000 (0:00:00.060) 0:01:22.915 ********
2026-01-30 05:01:30.330148 | orchestrator | skipping: [localhost]
2026-01-30 05:01:30.330166 | orchestrator |
2026-01-30 05:01:30.330182 | orchestrator | TASK [Delete test instances] ***************************************************
2026-01-30 05:01:30.330200 | orchestrator | Friday 30 January 2026 05:00:15 +0000 (0:00:00.058) 0:01:22.974 ********
2026-01-30 05:01:30.330227 | orchestrator | skipping: [localhost] => (item=test-4)
2026-01-30 05:01:30.330238 | orchestrator | skipping: [localhost] => (item=test-3)
2026-01-30 05:01:30.330270 | orchestrator | skipping: [localhost] => (item=test-2)
2026-01-30 05:01:30.330281 | orchestrator | skipping: [localhost] => (item=test-1)
2026-01-30 05:01:30.330291 | orchestrator | skipping: [localhost] => (item=test)
2026-01-30 05:01:30.330301 | orchestrator | skipping: [localhost]
2026-01-30 05:01:30.330310 | orchestrator |
2026-01-30 05:01:30.330320 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-01-30 05:01:30.330329 | orchestrator | Friday 30 January 2026 05:00:15 +0000 (0:00:00.162) 0:01:23.137 ********
2026-01-30 05:01:30.330339 | orchestrator | skipping: [localhost]
2026-01-30 05:01:30.330348 | orchestrator |
2026-01-30 05:01:30.330358 | orchestrator | TASK [Create test instances] ***************************************************
2026-01-30 05:01:30.330367 | orchestrator | Friday 30 January 2026 05:00:16 +0000 (0:00:00.141) 0:01:23.278 ********
2026-01-30 05:01:30.330377 | orchestrator | changed: [localhost] => (item=test)
2026-01-30 05:01:30.330386 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-30 05:01:30.330396 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-30 05:01:30.330406 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-30 05:01:30.330415 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-30 05:01:30.330425 | orchestrator |
2026-01-30 05:01:30.330434 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-01-30 05:01:30.330444 | orchestrator | Friday 30 January 2026 05:00:20 +0000 (0:00:04.769) 0:01:28.048 ********
2026-01-30 05:01:30.330453 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-01-30 05:01:30.330465 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-01-30 05:01:30.330474 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-01-30 05:01:30.330483 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-01-30 05:01:30.330493 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left).
2026-01-30 05:01:30.330505 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j227180053243.3653', 'results_file': '/ansible/.ansible_async/j227180053243.3653', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330518 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j659893926341.3678', 'results_file': '/ansible/.ansible_async/j659893926341.3678', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330540 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j494776726107.3703', 'results_file': '/ansible/.ansible_async/j494776726107.3703', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330550 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j680434593684.3728', 'results_file': '/ansible/.ansible_async/j680434593684.3728', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330560 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j277439091478.3753', 'results_file': '/ansible/.ansible_async/j277439091478.3753', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330570 | orchestrator |
2026-01-30 05:01:30.330580 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-01-30 05:01:30.330589 | orchestrator | Friday 30 January 2026 05:01:17 +0000 (0:00:56.849) 0:02:24.897 ********
2026-01-30 05:01:30.330599 | orchestrator | changed: [localhost] => (item=test)
2026-01-30 05:01:30.330609 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-30 05:01:30.330618 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-30 05:01:30.330628 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-30 05:01:30.330637 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-30 05:01:30.330647 | orchestrator |
2026-01-30 05:01:30.330657 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-01-30 05:01:30.330666 | orchestrator | Friday 30 January 2026 05:01:21 +0000 (0:00:03.743) 0:02:28.641 ********
2026-01-30 05:01:30.330676 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-01-30 05:01:30.330686 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j605797176587.3865', 'results_file': '/ansible/.ansible_async/j605797176587.3865', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330696 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j593131542633.3890', 'results_file': '/ansible/.ansible_async/j593131542633.3890', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330706 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j124039975518.3915', 'results_file': '/ansible/.ansible_async/j124039975518.3915', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-01-30 05:01:30.330730 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j439603878191.3940', 'results_file': '/ansible/.ansible_async/j439603878191.3940', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-01-30 05:02:08.905549 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j884628247986.3965', 'results_file': '/ansible/.ansible_async/j884628247986.3965', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-01-30 05:02:08.905664 | orchestrator |
2026-01-30 05:02:08.905681 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-01-30 05:02:08.905696 | orchestrator | Friday 30 January 2026 05:01:30 +0000 (0:00:08.900) 0:02:37.542 ********
2026-01-30 05:02:08.905707 | orchestrator | changed: [localhost] => (item=test)
2026-01-30 05:02:08.905720 | orchestrator | changed: [localhost] => (item=test-1)
2026-01-30 05:02:08.905731 | orchestrator | changed: [localhost] => (item=test-2)
2026-01-30 05:02:08.905742 | orchestrator | changed: [localhost] => (item=test-3)
2026-01-30 05:02:08.905753 | orchestrator | changed: [localhost] => (item=test-4)
2026-01-30 05:02:08.905767 | orchestrator |
2026-01-30 05:02:08.905816 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-01-30 05:02:08.905845 | orchestrator | Friday 30 January 2026 05:01:34 +0000 (0:00:04.245) 0:02:41.788 ********
2026-01-30 05:02:08.905865 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-01-30 05:02:08.905885 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j381110999681.4041', 'results_file': '/ansible/.ansible_async/j381110999681.4041', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-01-30 05:02:08.905904 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j642359076167.4066', 'results_file': '/ansible/.ansible_async/j642359076167.4066', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-01-30 05:02:08.905923 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j824914944280.4092', 'results_file': '/ansible/.ansible_async/j824914944280.4092', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-01-30 05:02:08.905942 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j878758348757.4118', 'results_file': '/ansible/.ansible_async/j878758348757.4118', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-01-30 05:02:08.906001 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j197635999695.4144', 'results_file': '/ansible/.ansible_async/j197635999695.4144', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-01-30 05:02:08.906086 | orchestrator |
2026-01-30 05:02:08.906103 | orchestrator | TASK [Create test volume] ******************************************************
2026-01-30 05:02:08.906117 | orchestrator | Friday 30 January 2026 05:01:43 +0000 (0:00:09.350) 0:02:51.138 ********
2026-01-30 05:02:08.906130 | orchestrator | changed: [localhost]
2026-01-30 05:02:08.906154 | orchestrator |
2026-01-30 05:02:08.906166 | orchestrator | TASK [Attach test volume] ******************************************************
2026-01-30 05:02:08.906179 | orchestrator | Friday 30 January 2026 05:01:50 +0000 (0:00:06.298) 0:02:57.437 ********
2026-01-30 05:02:08.906192 | orchestrator | changed: [localhost]
2026-01-30 05:02:08.906205 | orchestrator |
2026-01-30 05:02:08.906219 | orchestrator | TASK [Create floating ip address] **********************************************
2026-01-30 05:02:08.906233 | orchestrator | Friday 30 January 2026 05:02:03 +0000 (0:00:13.501) 0:03:10.938 ********
2026-01-30 05:02:08.906245 | orchestrator | ok: [localhost]
2026-01-30 05:02:08.906259 | orchestrator |
2026-01-30 05:02:08.906272 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-01-30 05:02:08.906285 | orchestrator | Friday 30 January 2026 05:02:08 +0000 (0:00:04.998) 0:03:15.936 ********
2026-01-30 05:02:08.906298 | orchestrator | ok: [localhost] => {
2026-01-30 05:02:08.906312 | orchestrator |  "msg": "192.168.112.179"
2026-01-30 05:02:08.906327 | orchestrator | }
2026-01-30 05:02:08.906340 | orchestrator |
2026-01-30 05:02:08.906353 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 05:02:08.906367 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-30 05:02:08.906381 | orchestrator |
2026-01-30 05:02:08.906395 | orchestrator |
2026-01-30 05:02:08.906408 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 05:02:08.906422 | orchestrator | Friday 30 January 2026 05:02:08 +0000 (0:00:00.044) 0:03:15.981 ********
2026-01-30 05:02:08.906435 | orchestrator | ===============================================================================
2026-01-30 05:02:08.906449 | orchestrator | Wait for instance creation to complete --------------------------------- 56.85s
2026-01-30 05:02:08.906462 | orchestrator | Attach test volume ----------------------------------------------------- 13.50s
2026-01-30 05:02:08.906473 | orchestrator | Create test router ----------------------------------------------------- 11.36s
2026-01-30 05:02:08.906511 | orchestrator | Add member roles to user test ------------------------------------------ 11.20s
2026-01-30 05:02:08.906522 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.35s
2026-01-30 05:02:08.906533 | orchestrator | Wait for metadata to be added ------------------------------------------- 8.90s
2026-01-30 05:02:08.906544 | orchestrator | Create test volume ------------------------------------------------------ 6.30s
2026-01-30 05:02:08.906575 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.03s
2026-01-30 05:02:08.906587 | orchestrator | Create test subnet ------------------------------------------------------ 5.81s
2026-01-30 05:02:08.906598 | orchestrator | Create floating ip address ---------------------------------------------- 5.00s
2026-01-30 05:02:08.906609 | orchestrator | Create ssh security group ----------------------------------------------- 4.90s
2026-01-30 05:02:08.906620 | orchestrator | Create test instances --------------------------------------------------- 4.77s
2026-01-30 05:02:08.906630 | orchestrator | Create test network ----------------------------------------------------- 4.75s
2026-01-30 05:02:08.906641 | orchestrator | Create test keypair ----------------------------------------------------- 4.29s
2026-01-30 05:02:08.906652 | orchestrator | Add tag to instances ---------------------------------------------------- 4.25s
2026-01-30 05:02:08.906663 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.12s
2026-01-30 05:02:08.906674 | orchestrator | Create test user -------------------------------------------------------- 4.01s
2026-01-30 05:02:08.906684 | orchestrator | Create test server group ------------------------------------------------ 4.00s
2026-01-30 05:02:08.906695 | orchestrator | Create test project ----------------------------------------------------- 3.98s
2026-01-30 05:02:08.906706 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.89s
2026-01-30 05:02:09.064655 | orchestrator | + server_list
2026-01-30 05:02:09.064722 | orchestrator | + openstack --os-cloud test server list
2026-01-30 05:02:12.999408 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-30 05:02:12.999499 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-01-30 05:02:12.999524 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-30 05:02:12.999544 | orchestrator | | 598d78c6-ff2d-4551-ab5a-cdf6cdf5aacf | test-3 | ACTIVE | test=192.168.112.138, 192.168.200.218 | N/A (booted from volume) | SCS-1L-1 |
2026-01-30 05:02:12.999553 | orchestrator | | f084758a-7e95-4427-91e5-85c152aef12d | test-4 | ACTIVE | test=192.168.112.181, 192.168.200.243 | N/A (booted from volume) | SCS-1L-1 |
2026-01-30 05:02:12.999562 | orchestrator | | b0dfcc81-25d4-43ee-8914-b78821fe8856 | test-2 | ACTIVE | test=192.168.112.180, 192.168.200.160 | N/A (booted from volume) | SCS-1L-1 |
2026-01-30 05:02:12.999570 | orchestrator | | ee3572bc-3d53-4a4e-bb76-b68eb615d907 | test-1 | ACTIVE | test=192.168.112.131, 192.168.200.226 | N/A (booted from volume) | SCS-1L-1 |
2026-01-30 05:02:12.999579 | orchestrator | | 5fe49167-1413-4362-a384-b8e84de60522 | test | ACTIVE | test=192.168.112.179, 192.168.200.120 | N/A (booted from volume) | SCS-1L-1 |
2026-01-30 05:02:12.999588 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-01-30 05:02:13.243254 | orchestrator | + openstack --os-cloud test server show test
2026-01-30 05:02:16.550614 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-30 05:02:16.550711 | orchestrator | | Field | Value |
2026-01-30 05:02:16.550741 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-30 05:02:16.550753 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-30 05:02:16.550760 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-30 05:02:16.550766 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-30 05:02:16.550772 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-01-30 05:02:16.550778 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-30 05:02:16.550784 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-30 05:02:16.550804 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-30 05:02:16.550811 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-30 05:02:16.550822 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-30 05:02:16.550828 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-30 05:02:16.550838 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-30 05:02:16.550844 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-30 05:02:16.550850 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-30 05:02:16.550857 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-30 05:02:16.550863 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-30 05:02:16.550870 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-30T05:00:54.000000 |
2026-01-30 05:02:16.550881 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-30 05:02:16.550896 | orchestrator | | accessIPv4 | |
2026-01-30 05:02:16.550904 | orchestrator | | accessIPv6 | |
2026-01-30 05:02:16.550911 | orchestrator | | addresses | test=192.168.112.179, 192.168.200.120 |
2026-01-30 05:02:16.550920 | orchestrator | | config_drive | |
2026-01-30 05:02:16.550927 | orchestrator | | created | 2026-01-30T05:00:25Z |
2026-01-30 05:02:16.550933 | orchestrator | | description | None |
2026-01-30 05:02:16.550963 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-01-30 05:02:16.550970 | orchestrator | | hostId | 6f84f82e46e5fe98b2db378d999b1ff7cac2b18489215d7a6d925388 |
2026-01-30 05:02:16.550976 | orchestrator | | host_status | None |
2026-01-30 05:02:16.551023 | orchestrator | | id | 5fe49167-1413-4362-a384-b8e84de60522 |
2026-01-30 05:02:16.551032 | orchestrator | | image | N/A (booted from volume) |
2026-01-30 05:02:16.551039 | orchestrator | | key_name | test |
2026-01-30 05:02:16.551045 | orchestrator | | locked | False |
2026-01-30 05:02:16.551052 | orchestrator | | locked_reason | None |
2026-01-30 05:02:16.551059 | orchestrator | | name | test |
2026-01-30 05:02:16.551065 | orchestrator | | pinned_availability_zone | None |
2026-01-30 05:02:16.551071 | orchestrator | | progress | 0 |
2026-01-30 05:02:16.551078 | orchestrator | | project_id | 2f98b0785f644a75aaf4f66293172a87 |
2026-01-30 05:02:16.551084 | orchestrator | | properties | hostname='test' |
2026-01-30 05:02:16.551108 | orchestrator | | security_groups | name='icmp' |
2026-01-30 05:02:16.551115 | orchestrator | | | name='ssh' |
2026-01-30 05:02:16.551122 | orchestrator | | server_groups | None |
2026-01-30 05:02:16.551128 | orchestrator | | status | ACTIVE |
2026-01-30 05:02:16.551140 | orchestrator | | tags | test |
2026-01-30 05:02:16.551148 | orchestrator | | trusted_image_certificates | None |
2026-01-30 05:02:16.551154 | orchestrator | | updated | 2026-01-30T05:01:23Z |
2026-01-30 05:02:16.551162 | orchestrator | | user_id | daace268a7e940e2a76ed5e498c2dab0 |
2026-01-30 05:02:16.551169 | orchestrator | | volumes_attached | delete_on_termination='True', id='67a3db05-6223-45ba-aa4d-0c8a1b26008f' |
2026-01-30 05:02:16.551184 | orchestrator | | | delete_on_termination='False', id='ac47bbd2-ca07-42a2-98ab-704c25e9439e' |
2026-01-30 05:02:16.554093 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-30 05:02:16.776363 | orchestrator | + openstack --os-cloud test server show test-1
2026-01-30 05:02:19.824611 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-30 05:02:19.824771 | orchestrator | | Field | Value |
2026-01-30 05:02:19.824821 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-01-30 05:02:19.824836 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-01-30 05:02:19.824848 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-01-30 05:02:19.824860 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-01-30 05:02:19.824872 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-01-30 05:02:19.824905 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-01-30 05:02:19.824917 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-01-30 05:02:19.824988 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-01-30 05:02:19.825003 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-01-30 05:02:19.825015 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-01-30 05:02:19.825032 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-01-30 05:02:19.825044 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-01-30 05:02:19.825055 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-01-30 05:02:19.825066 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-01-30 05:02:19.825086 | orchestrator | | OS-EXT-STS:task_state | None |
2026-01-30 05:02:19.825097 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-01-30 05:02:19.825111 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-30T05:00:54.000000 |
2026-01-30 05:02:19.825132 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-01-30 05:02:19.825146 | orchestrator | | accessIPv4 | |
2026-01-30
05:02:19.825159 | orchestrator | | accessIPv6 | | 2026-01-30 05:02:19.825178 | orchestrator | | addresses | test=192.168.112.131, 192.168.200.226 | 2026-01-30 05:02:19.825192 | orchestrator | | config_drive | | 2026-01-30 05:02:19.825205 | orchestrator | | created | 2026-01-30T05:00:26Z | 2026-01-30 05:02:19.825225 | orchestrator | | description | None | 2026-01-30 05:02:19.825238 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-30 05:02:19.825252 | orchestrator | | hostId | 6f84f82e46e5fe98b2db378d999b1ff7cac2b18489215d7a6d925388 | 2026-01-30 05:02:19.825266 | orchestrator | | host_status | None | 2026-01-30 05:02:19.825287 | orchestrator | | id | ee3572bc-3d53-4a4e-bb76-b68eb615d907 | 2026-01-30 05:02:19.825300 | orchestrator | | image | N/A (booted from volume) | 2026-01-30 05:02:19.825313 | orchestrator | | key_name | test | 2026-01-30 05:02:19.825332 | orchestrator | | locked | False | 2026-01-30 05:02:19.825345 | orchestrator | | locked_reason | None | 2026-01-30 05:02:19.825359 | orchestrator | | name | test-1 | 2026-01-30 05:02:19.825379 | orchestrator | | pinned_availability_zone | None | 2026-01-30 05:02:19.825391 | orchestrator | | progress | 0 | 2026-01-30 05:02:19.825402 | orchestrator | | project_id | 2f98b0785f644a75aaf4f66293172a87 | 2026-01-30 05:02:19.825413 | orchestrator | | properties | hostname='test-1' | 2026-01-30 05:02:19.825432 | orchestrator | | security_groups | name='icmp' | 2026-01-30 05:02:19.825444 | orchestrator | | | name='ssh' | 2026-01-30 05:02:19.825456 | orchestrator | | server_groups | None | 2026-01-30 05:02:19.825468 | orchestrator | | status | ACTIVE | 2026-01-30 
05:02:19.825479 | orchestrator | | tags | test | 2026-01-30 05:02:19.825506 | orchestrator | | trusted_image_certificates | None | 2026-01-30 05:02:19.825534 | orchestrator | | updated | 2026-01-30T05:01:23Z | 2026-01-30 05:02:19.825555 | orchestrator | | user_id | daace268a7e940e2a76ed5e498c2dab0 | 2026-01-30 05:02:19.825574 | orchestrator | | volumes_attached | delete_on_termination='True', id='704b2f80-c286-4a12-95f8-eb413180b484' | 2026-01-30 05:02:19.828979 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:20.053033 | orchestrator | + openstack --os-cloud test server show test-2 2026-01-30 05:02:22.969260 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:22.969412 | orchestrator | | Field | Value | 2026-01-30 05:02:22.969455 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:22.969471 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-30 05:02:22.969509 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-30 05:02:22.969519 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-30 05:02:22.969529 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-01-30 05:02:22.969538 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-30 05:02:22.969547 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-30 05:02:22.969574 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-30 05:02:22.969584 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-30 05:02:22.969593 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-30 05:02:22.969602 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-30 05:02:22.969624 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-30 05:02:22.969634 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-30 05:02:22.969643 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-30 05:02:22.969652 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-30 05:02:22.969661 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-30 05:02:22.969670 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-30T05:00:52.000000 | 2026-01-30 05:02:22.969685 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-30 05:02:22.969694 | orchestrator | | accessIPv4 | | 2026-01-30 05:02:22.969703 | orchestrator | | accessIPv6 | | 2026-01-30 05:02:22.969716 | orchestrator | | addresses | test=192.168.112.180, 192.168.200.160 | 2026-01-30 05:02:22.969731 | orchestrator | | config_drive | | 2026-01-30 05:02:22.969740 | orchestrator | | created | 2026-01-30T05:00:26Z | 2026-01-30 05:02:22.969749 | orchestrator | | description | None | 2026-01-30 05:02:22.969758 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-30 05:02:22.969767 | orchestrator | | hostId | 6f84f82e46e5fe98b2db378d999b1ff7cac2b18489215d7a6d925388 | 2026-01-30 05:02:22.969777 | orchestrator | | host_status | None | 2026-01-30 05:02:22.969792 | orchestrator | | id | b0dfcc81-25d4-43ee-8914-b78821fe8856 | 2026-01-30 05:02:22.969804 | orchestrator | | image | N/A (booted from volume) | 2026-01-30 05:02:22.969815 | orchestrator | | key_name | test | 2026-01-30 05:02:22.969834 | orchestrator | | locked | False | 2026-01-30 05:02:22.969845 | orchestrator | | locked_reason | None | 2026-01-30 05:02:22.969856 | orchestrator | | name | test-2 | 2026-01-30 05:02:22.969866 | orchestrator | | pinned_availability_zone | None | 2026-01-30 05:02:22.969877 | orchestrator | | progress | 0 | 2026-01-30 05:02:22.969887 | orchestrator | | project_id | 2f98b0785f644a75aaf4f66293172a87 | 2026-01-30 05:02:22.969898 | orchestrator | | properties | hostname='test-2' | 2026-01-30 05:02:22.969914 | orchestrator | | security_groups | name='icmp' | 2026-01-30 05:02:22.969925 | orchestrator | | | name='ssh' | 2026-01-30 05:02:22.969971 | orchestrator | | server_groups | None | 2026-01-30 05:02:22.969987 | orchestrator | | status | ACTIVE | 2026-01-30 05:02:22.969997 | orchestrator | | tags | test | 2026-01-30 05:02:22.970008 | orchestrator | | trusted_image_certificates | None | 2026-01-30 05:02:22.970072 | orchestrator | | updated | 2026-01-30T05:01:24Z | 2026-01-30 05:02:22.970083 | orchestrator | | user_id | daace268a7e940e2a76ed5e498c2dab0 | 2026-01-30 05:02:22.970094 | orchestrator | | volumes_attached | delete_on_termination='True', id='77057e67-af70-4cac-b3af-93fd3c4a68fe' | 2026-01-30 05:02:22.972909 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:23.271830 | orchestrator | + openstack --os-cloud test server show test-3 2026-01-30 05:02:26.174752 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:26.174866 | orchestrator | | Field | Value | 2026-01-30 05:02:26.174876 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:26.174893 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-30 05:02:26.174898 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-30 05:02:26.174903 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-30 05:02:26.174908 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-01-30 05:02:26.174913 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-30 05:02:26.174918 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-30 
05:02:26.174961 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-30 05:02:26.174971 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-30 05:02:26.174976 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-30 05:02:26.174981 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-30 05:02:26.174993 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-30 05:02:26.174998 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-30 05:02:26.175003 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-01-30 05:02:26.175007 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-30 05:02:26.175012 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-30 05:02:26.175017 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-30T05:00:54.000000 | 2026-01-30 05:02:26.175024 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-30 05:02:26.175032 | orchestrator | | accessIPv4 | | 2026-01-30 05:02:26.175037 | orchestrator | | accessIPv6 | | 2026-01-30 05:02:26.175041 | orchestrator | | addresses | test=192.168.112.138, 192.168.200.218 | 2026-01-30 05:02:26.175299 | orchestrator | | config_drive | | 2026-01-30 05:02:26.175306 | orchestrator | | created | 2026-01-30T05:00:27Z | 2026-01-30 05:02:26.175311 | orchestrator | | description | None | 2026-01-30 05:02:26.175316 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-30 05:02:26.175320 | orchestrator | | hostId | 6f84f82e46e5fe98b2db378d999b1ff7cac2b18489215d7a6d925388 | 2026-01-30 05:02:26.175325 | orchestrator | | host_status | None | 2026-01-30 05:02:26.175338 | orchestrator | | id | 
598d78c6-ff2d-4551-ab5a-cdf6cdf5aacf | 2026-01-30 05:02:26.175345 | orchestrator | | image | N/A (booted from volume) | 2026-01-30 05:02:26.175350 | orchestrator | | key_name | test | 2026-01-30 05:02:26.175354 | orchestrator | | locked | False | 2026-01-30 05:02:26.175359 | orchestrator | | locked_reason | None | 2026-01-30 05:02:26.175363 | orchestrator | | name | test-3 | 2026-01-30 05:02:26.175368 | orchestrator | | pinned_availability_zone | None | 2026-01-30 05:02:26.175372 | orchestrator | | progress | 0 | 2026-01-30 05:02:26.175377 | orchestrator | | project_id | 2f98b0785f644a75aaf4f66293172a87 | 2026-01-30 05:02:26.175385 | orchestrator | | properties | hostname='test-3' | 2026-01-30 05:02:26.175393 | orchestrator | | security_groups | name='icmp' | 2026-01-30 05:02:26.175400 | orchestrator | | | name='ssh' | 2026-01-30 05:02:26.175405 | orchestrator | | server_groups | None | 2026-01-30 05:02:26.175410 | orchestrator | | status | ACTIVE | 2026-01-30 05:02:26.175414 | orchestrator | | tags | test | 2026-01-30 05:02:26.175419 | orchestrator | | trusted_image_certificates | None | 2026-01-30 05:02:26.175423 | orchestrator | | updated | 2026-01-30T05:01:24Z | 2026-01-30 05:02:26.175428 | orchestrator | | user_id | daace268a7e940e2a76ed5e498c2dab0 | 2026-01-30 05:02:26.175435 | orchestrator | | volumes_attached | delete_on_termination='True', id='b0492807-90cc-491b-a442-d36f8a18db09' | 2026-01-30 05:02:26.180517 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:26.450335 | orchestrator | + openstack --os-cloud test server show test-4 2026-01-30 05:02:29.615465 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:29.615576 | orchestrator | | Field | Value | 2026-01-30 05:02:29.615590 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:29.615601 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-01-30 05:02:29.615610 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-01-30 05:02:29.615618 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-01-30 05:02:29.615626 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-01-30 05:02:29.615652 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-01-30 05:02:29.615661 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-01-30 05:02:29.615686 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-01-30 05:02:29.615695 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-01-30 05:02:29.615720 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-01-30 05:02:29.615729 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-01-30 05:02:29.615738 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-01-30 05:02:29.615746 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-01-30 05:02:29.615755 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-01-30 05:02:29.615763 | orchestrator | | OS-EXT-STS:task_state | None | 2026-01-30 05:02:29.615781 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-01-30 05:02:29.615789 | orchestrator | | OS-SRV-USG:launched_at | 2026-01-30T05:00:58.000000 | 2026-01-30 05:02:29.615804 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-01-30 05:02:29.615817 | orchestrator | | accessIPv4 | | 2026-01-30 05:02:29.615825 | orchestrator | | accessIPv6 | | 2026-01-30 05:02:29.615834 | orchestrator | | addresses | test=192.168.112.181, 192.168.200.243 | 2026-01-30 05:02:29.615842 | orchestrator | | config_drive | | 2026-01-30 05:02:29.615850 | orchestrator | | created | 2026-01-30T05:00:27Z | 2026-01-30 05:02:29.615859 | orchestrator | | description | None | 2026-01-30 05:02:29.615872 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-01-30 05:02:29.615881 | orchestrator | | hostId | 6f84f82e46e5fe98b2db378d999b1ff7cac2b18489215d7a6d925388 | 2026-01-30 05:02:29.615889 | orchestrator | | host_status | None | 2026-01-30 05:02:29.615903 | orchestrator | | id | f084758a-7e95-4427-91e5-85c152aef12d | 2026-01-30 05:02:29.615916 | orchestrator | | image | N/A (booted from volume) | 2026-01-30 05:02:29.615995 | orchestrator | | key_name | test | 2026-01-30 05:02:29.616013 | orchestrator | | locked | False | 2026-01-30 05:02:29.616021 | orchestrator | | locked_reason | None | 2026-01-30 05:02:29.616030 | orchestrator | | name | test-4 | 2026-01-30 05:02:29.616043 | orchestrator | | pinned_availability_zone | None | 2026-01-30 05:02:29.616051 | orchestrator | | progress | 0 | 2026-01-30 
05:02:29.616060 | orchestrator | | project_id | 2f98b0785f644a75aaf4f66293172a87 | 2026-01-30 05:02:29.616068 | orchestrator | | properties | hostname='test-4' | 2026-01-30 05:02:29.616083 | orchestrator | | security_groups | name='icmp' | 2026-01-30 05:02:29.616096 | orchestrator | | | name='ssh' | 2026-01-30 05:02:29.616105 | orchestrator | | server_groups | None | 2026-01-30 05:02:29.616113 | orchestrator | | status | ACTIVE | 2026-01-30 05:02:29.616122 | orchestrator | | tags | test | 2026-01-30 05:02:29.616135 | orchestrator | | trusted_image_certificates | None | 2026-01-30 05:02:29.616143 | orchestrator | | updated | 2026-01-30T05:01:25Z | 2026-01-30 05:02:29.616151 | orchestrator | | user_id | daace268a7e940e2a76ed5e498c2dab0 | 2026-01-30 05:02:29.616159 | orchestrator | | volumes_attached | delete_on_termination='True', id='505bb8bb-fb08-48a2-96fb-0f699afc343f' | 2026-01-30 05:02:29.621450 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-01-30 05:02:29.885281 | orchestrator | + server_ping 2026-01-30 05:02:29.887131 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-01-30 05:02:29.887186 | orchestrator | ++ tr -d '\r' 2026-01-30 05:02:32.813648 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-01-30 05:02:32.813813 | orchestrator | + ping -c3 192.168.112.138 2026-01-30 05:02:32.828229 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 
2026-01-30 05:02:32.828304 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=8.35 ms
2026-01-30 05:02:33.823860 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.37 ms
2026-01-30 05:02:34.825549 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.24 ms
2026-01-30 05:02:34.825666 | orchestrator |
2026-01-30 05:02:34.825681 | orchestrator | --- 192.168.112.138 ping statistics ---
2026-01-30 05:02:34.825691 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-30 05:02:34.825700 | orchestrator | rtt min/avg/max/mdev = 2.243/4.323/8.354/2.850 ms
2026-01-30 05:02:34.825940 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-30 05:02:34.825959 | orchestrator | + ping -c3 192.168.112.179
2026-01-30 05:02:34.838349 | orchestrator | PING 192.168.112.179 (192.168.112.179) 56(84) bytes of data.
2026-01-30 05:02:34.838431 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=1 ttl=63 time=7.07 ms
2026-01-30 05:02:35.834703 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=2 ttl=63 time=2.01 ms
2026-01-30 05:02:36.836087 | orchestrator | 64 bytes from 192.168.112.179: icmp_seq=3 ttl=63 time=1.93 ms
2026-01-30 05:02:36.836163 | orchestrator |
2026-01-30 05:02:36.836170 | orchestrator | --- 192.168.112.179 ping statistics ---
2026-01-30 05:02:36.836176 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-30 05:02:36.836181 | orchestrator | rtt min/avg/max/mdev = 1.929/3.668/7.070/2.405 ms
2026-01-30 05:02:36.837358 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-30 05:02:36.837391 | orchestrator | + ping -c3 192.168.112.181
2026-01-30 05:02:36.851298 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2026-01-30 05:02:36.851386 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=8.91 ms
2026-01-30 05:02:37.845996 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.05 ms
2026-01-30 05:02:38.847434 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.14 ms
2026-01-30 05:02:38.847553 | orchestrator |
2026-01-30 05:02:38.847572 | orchestrator | --- 192.168.112.181 ping statistics ---
2026-01-30 05:02:38.847586 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-30 05:02:38.847682 | orchestrator | rtt min/avg/max/mdev = 2.046/4.366/8.914/3.215 ms
2026-01-30 05:02:38.847717 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-30 05:02:38.847730 | orchestrator | + ping -c3 192.168.112.180
2026-01-30 05:02:38.860747 | orchestrator | PING 192.168.112.180 (192.168.112.180) 56(84) bytes of data.
2026-01-30 05:02:38.860825 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=1 ttl=63 time=7.98 ms
2026-01-30 05:02:39.856369 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=2 ttl=63 time=2.68 ms
2026-01-30 05:02:40.858689 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=3 ttl=63 time=2.28 ms
2026-01-30 05:02:40.858820 | orchestrator |
2026-01-30 05:02:40.858841 | orchestrator | --- 192.168.112.180 ping statistics ---
2026-01-30 05:02:40.859028 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-01-30 05:02:40.859044 | orchestrator | rtt min/avg/max/mdev = 2.284/4.314/7.983/2.599 ms
2026-01-30 05:02:40.859070 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-01-30 05:02:40.859082 | orchestrator | + ping -c3 192.168.112.131
2026-01-30 05:02:40.870546 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
2026-01-30 05:02:40.870651 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=6.91 ms
2026-01-30 05:02:41.866742 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.22 ms
2026-01-30 05:02:42.867869 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=2.30 ms
2026-01-30 05:02:42.868484 | orchestrator |
2026-01-30 05:02:42.868523 | orchestrator | --- 192.168.112.131 ping statistics ---
2026-01-30 05:02:42.868532 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-01-30 05:02:42.868540 | orchestrator | rtt min/avg/max/mdev = 2.222/3.809/6.908/2.191 ms
2026-01-30 05:02:42.869252 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-01-30 05:02:43.352076 | orchestrator | ok: Runtime: 0:07:47.092220
2026-01-30 05:02:43.401168 |
2026-01-30 05:02:43.401316 | TASK [Run tempest]
2026-01-30 05:02:43.936747 | orchestrator | skipping: Conditional result was False
2026-01-30 05:02:43.954435 |
2026-01-30 05:02:43.954595 | TASK [Check prometheus alert status]
2026-01-30 05:02:44.490135 | orchestrator | skipping: Conditional result was False
2026-01-30 05:02:44.505041 |
2026-01-30 05:02:44.505171 | PLAY [Upgrade testbed]
2026-01-30 05:02:44.515887 |
2026-01-30 05:02:44.515996 | TASK [Print next ceph version]
2026-01-30 05:02:44.594294 | orchestrator | ok
2026-01-30 05:02:44.604769 |
2026-01-30 05:02:44.604895 | TASK [Print next openstack version]
2026-01-30 05:02:44.671680 | orchestrator | ok
2026-01-30 05:02:44.681985 |
2026-01-30 05:02:44.682119 | TASK [Print next manager version]
2026-01-30 05:02:44.752040 | orchestrator | ok
2026-01-30 05:02:44.762584 |
2026-01-30 05:02:44.762706 | TASK [Set cloud fact (Zuul deployment)]
2026-01-30 05:02:44.818508 | orchestrator | ok
2026-01-30 05:02:44.828804 |
2026-01-30 05:02:44.828923 | TASK [Set cloud fact (local deployment)]
2026-01-30 05:02:44.853164 | orchestrator | skipping: Conditional result was False
2026-01-30 05:02:44.865974 |
2026-01-30 05:02:44.866102 | TASK [Fetch manager address]
2026-01-30 05:02:45.137947 | orchestrator | ok
2026-01-30 05:02:45.148664 |
2026-01-30 05:02:45.148804 | TASK [Set manager_host address]
2026-01-30 05:02:45.228466 | orchestrator | ok
2026-01-30 05:02:45.240453 |
2026-01-30 05:02:45.240610 | TASK [Run upgrade]
2026-01-30 05:02:45.929280 | orchestrator | + set -e
2026-01-30 05:02:45.929543 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-01-30 05:02:45.929572 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-01-30 05:02:45.929593 | orchestrator | + CEPH_VERSION=reef
2026-01-30 05:02:45.929607 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-01-30 05:02:45.929619 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-01-30 05:02:45.929643 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
2026-01-30 05:02:45.938554 | orchestrator | + set -e
2026-01-30 05:02:45.938622 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 05:02:45.939290 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 05:02:45.939381 | orchestrator | ++ INTERACTIVE=false
2026-01-30 05:02:45.939399 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 05:02:45.939420 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 05:02:45.940635 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
2026-01-30 05:02:45.982478 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
2026-01-30 05:02:45.982966 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-01-30 05:02:46.018095 | orchestrator |
2026-01-30 05:02:46.018184 | orchestrator | # UPGRADE MANAGER
2026-01-30 05:02:46.018199 | orchestrator |
2026-01-30 05:02:46.018208 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2
2026-01-30 05:02:46.018218 | orchestrator | + echo
2026-01-30 05:02:46.018226 | orchestrator | + echo '# UPGRADE MANAGER'
2026-01-30 05:02:46.018237 | orchestrator | + echo
2026-01-30 05:02:46.018245 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
2026-01-30 05:02:46.018253 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1
2026-01-30 05:02:46.018261 | orchestrator | + CEPH_VERSION=reef
2026-01-30 05:02:46.018269 | orchestrator | + OPENSTACK_VERSION=2024.2
2026-01-30 05:02:46.018277 | orchestrator | + KOLLA_NAMESPACE=kolla/release
2026-01-30 05:02:46.018285 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
2026-01-30 05:02:46.025416 | orchestrator | + set -e
2026-01-30 05:02:46.025532 | orchestrator | + VERSION=10.0.0-rc.1
2026-01-30 05:02:46.025551 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
2026-01-30 05:02:46.031341 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
2026-01-30 05:02:46.031404 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-30 05:02:46.035550 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-30 05:02:46.040451 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-01-30 05:02:46.048706 | orchestrator | /opt/configuration ~
2026-01-30 05:02:46.048776 | orchestrator | + set -e
2026-01-30 05:02:46.048791 | orchestrator | + pushd /opt/configuration
2026-01-30 05:02:46.048803 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-30 05:02:46.048815 | orchestrator | + source /opt/venv/bin/activate
2026-01-30 05:02:46.049874 | orchestrator | ++ deactivate nondestructive
2026-01-30 05:02:46.049930 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:46.049938 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:46.049945 | orchestrator | ++ hash -r
2026-01-30 05:02:46.049951 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:46.049966 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-30 05:02:46.049972 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-30 05:02:46.049979 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-30 05:02:46.050079 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-30 05:02:46.050094 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-30 05:02:46.050105 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-30 05:02:46.050117 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-30 05:02:46.050127 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 05:02:46.050139 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 05:02:46.050149 | orchestrator | ++ export PATH
2026-01-30 05:02:46.050159 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:46.050170 | orchestrator | ++ '[' -z '' ']'
2026-01-30 05:02:46.050180 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-30 05:02:46.050223 | orchestrator | ++ PS1='(venv) '
2026-01-30 05:02:46.050375 | orchestrator | ++ export PS1
2026-01-30 05:02:46.050390 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-30 05:02:46.050401 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-30 05:02:46.050411 | orchestrator | ++ hash -r
2026-01-30 05:02:46.050426 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-01-30 05:02:46.973845 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-01-30 05:02:46.974811 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-01-30 05:02:46.976138 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-01-30 05:02:46.977408 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-01-30 05:02:46.978591 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-01-30 05:02:46.988564 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-01-30 05:02:46.990038 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-01-30 05:02:46.991065 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-01-30 05:02:46.992345 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-01-30 05:02:47.022628 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-01-30 05:02:47.024014 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-01-30 05:02:47.025571 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-01-30 05:02:47.027014 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-01-30 05:02:47.030789 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-01-30 05:02:47.243230 | orchestrator | ++ which gilt
2026-01-30 05:02:47.246658 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-01-30 05:02:47.246711 | orchestrator | + /opt/venv/bin/gilt overlay
2026-01-30 05:02:47.444025 | orchestrator | osism.cfg-generics:
2026-01-30 05:02:47.535210 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-01-30 05:02:47.535856 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-01-30 05:02:47.536881 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-01-30 05:02:47.536956 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-01-30 05:02:48.398723 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-01-30 05:02:48.407532 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-01-30 05:02:48.726446 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-01-30 05:02:48.793425 | orchestrator | ~
2026-01-30 05:02:48.793520 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-30 05:02:48.793528 | orchestrator | + deactivate
2026-01-30 05:02:48.793533 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-30 05:02:48.793539 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 05:02:48.793543 | orchestrator | + export PATH
2026-01-30 05:02:48.793548 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-30 05:02:48.793552 | orchestrator | + '[' -n '' ']'
2026-01-30 05:02:48.793556 | orchestrator | + hash -r
2026-01-30 05:02:48.793560 | orchestrator | + '[' -n '' ']'
2026-01-30 05:02:48.793564 | orchestrator | + unset VIRTUAL_ENV
2026-01-30 05:02:48.793568 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-30 05:02:48.793572 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-30 05:02:48.793575 | orchestrator | + unset -f deactivate
2026-01-30 05:02:48.793579 | orchestrator | + popd
2026-01-30 05:02:48.795583 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]]
2026-01-30 05:02:48.795613 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-01-30 05:02:48.800928 | orchestrator | + set -e
2026-01-30 05:02:48.801001 | orchestrator | + NAMESPACE=kolla/release
2026-01-30 05:02:48.801020 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-01-30 05:02:48.809116 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-01-30 05:02:48.815257 | orchestrator | /opt/configuration ~
2026-01-30 05:02:48.815321 | orchestrator | + set -e
2026-01-30 05:02:48.815329 | orchestrator | + pushd /opt/configuration
2026-01-30 05:02:48.815336 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-30 05:02:48.815342 | orchestrator | + source /opt/venv/bin/activate
2026-01-30 05:02:48.815348 | orchestrator | ++ deactivate nondestructive
2026-01-30 05:02:48.815353 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:48.815359 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:48.815374 | orchestrator | ++ hash -r
2026-01-30 05:02:48.815380 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:48.815387 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-30 05:02:48.815394 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-30 05:02:48.815400 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-30 05:02:48.815407 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-30 05:02:48.815414 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-30 05:02:48.815421 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-30 05:02:48.815431 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-30 05:02:48.815437 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 05:02:48.815447 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 05:02:48.815451 | orchestrator | ++ export PATH
2026-01-30 05:02:48.815632 | orchestrator | ++ '[' -n '' ']'
2026-01-30 05:02:48.815639 | orchestrator | ++ '[' -z '' ']'
2026-01-30 05:02:48.815644 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-30 05:02:48.815648 | orchestrator | ++ PS1='(venv) '
2026-01-30 05:02:48.815652 | orchestrator | ++ export PS1
2026-01-30 05:02:48.815656 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-30 05:02:48.815661 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-30 05:02:48.815683 | orchestrator | ++ hash -r
2026-01-30 05:02:48.815689 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-01-30 05:02:49.308976 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-01-30 05:02:49.310181 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-01-30 05:02:49.311391 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-01-30 05:02:49.312678 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-01-30 05:02:49.313826 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-01-30 05:02:49.323714 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-01-30 05:02:49.325175 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-01-30 05:02:49.326085 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-01-30 05:02:49.327421 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-01-30 05:02:49.356796 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-01-30 05:02:49.358137 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-01-30 05:02:49.359849 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-01-30 05:02:49.361184 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-01-30 05:02:49.365115 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-01-30 05:02:49.560344 | orchestrator | ++ which gilt
2026-01-30 05:02:49.563088 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-01-30 05:02:49.563184 | orchestrator | + /opt/venv/bin/gilt overlay
2026-01-30 05:02:49.739347 | orchestrator | osism.cfg-generics:
2026-01-30 05:02:49.822996 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-01-30 05:02:49.823100 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-01-30 05:02:49.823288 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-01-30 05:02:49.823381 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-01-30 05:02:50.509292 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-01-30 05:02:50.521002 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-01-30 05:02:50.903021 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-01-30 05:02:50.959264 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-30 05:02:50.959341 | orchestrator | + deactivate
2026-01-30 05:02:50.959367 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-30 05:02:50.959376 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-30 05:02:50.959381 | orchestrator | + export PATH
2026-01-30 05:02:50.959386 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-30 05:02:50.959392 | orchestrator | + '[' -n '' ']'
2026-01-30 05:02:50.959397 | orchestrator | + hash -r
2026-01-30 05:02:50.959402 | orchestrator | + '[' -n '' ']'
2026-01-30 05:02:50.959407 | orchestrator | + unset VIRTUAL_ENV
2026-01-30 05:02:50.959413 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-30 05:02:50.959418 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-30 05:02:50.959423 | orchestrator | + unset -f deactivate
2026-01-30 05:02:50.959428 | orchestrator | ~
2026-01-30 05:02:50.959434 | orchestrator | + popd
2026-01-30 05:02:50.961619 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-01-30 05:02:51.027100 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-30 05:02:51.027954 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-01-30 05:02:51.130090 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-30 05:02:51.130176 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-30 05:02:51.134711 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-01-30 05:02:51.140390 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-01-30 05:02:51.201916 | orchestrator | ++ '[' -1 -le 0 ']'
2026-01-30 05:02:51.202555 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0
2026-01-30 05:02:51.294069 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-01-30 05:02:51.294193 | orchestrator | ++ echo true
2026-01-30 05:02:51.294708 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-01-30 05:02:51.296388 | orchestrator | +++ semver 2024.2 2024.2
2026-01-30 05:02:51.377841 | orchestrator | ++ '[' 0 -le 0 ']'
2026-01-30 05:02:51.378713 | orchestrator | +++ semver 2024.2 2025.1
2026-01-30 05:02:51.443340 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-01-30 05:02:51.443435 | orchestrator | ++ echo false
2026-01-30 05:02:51.443455 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-01-30 05:02:51.443664 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-30 05:02:51.443683 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-01-30 05:02:51.443778 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-01-30 05:02:51.444011 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-01-30 05:02:51.450741 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-01-30 05:02:51.450835 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-01-30 05:02:51.470459 | orchestrator | export RABBITMQ3TO4=true
2026-01-30 05:02:51.474591 | orchestrator | + osism update manager
2026-01-30 05:02:56.939736 | orchestrator | Collecting uv
2026-01-30 05:02:57.053722 | orchestrator | Downloading uv-0.9.28-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-01-30 05:02:57.075130 | orchestrator | Downloading uv-0.9.28-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.7 MB)
2026-01-30 05:02:57.839802 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.7/22.7 MB 35.6 MB/s eta 0:00:00
2026-01-30 05:02:57.900343 | orchestrator | Installing collected packages: uv
2026-01-30 05:02:58.386758 | orchestrator | Successfully installed uv-0.9.28
2026-01-30 05:02:59.070135 | orchestrator | Resolved 11 packages in 397ms
2026-01-30 05:02:59.108256 | orchestrator | Downloading cryptography (4.2MiB)
2026-01-30 05:02:59.108648 | orchestrator | Downloading netaddr (2.2MiB)
2026-01-30 05:02:59.108676 | orchestrator | Downloading ansible-core (2.1MiB)
2026-01-30 05:02:59.322816 | orchestrator | Downloading ansible (54.5MiB)
2026-01-30 05:02:59.511371 | orchestrator | Downloaded netaddr
2026-01-30 05:02:59.567057 | orchestrator | Downloaded cryptography
2026-01-30 05:02:59.608117 | orchestrator | Downloaded ansible-core
2026-01-30 05:03:06.687598 | orchestrator | Downloaded ansible
2026-01-30 05:03:06.687682 | orchestrator | Prepared 11 packages in 7.61s
2026-01-30 05:03:07.226529 | orchestrator | Installed 11 packages in 537ms
2026-01-30 05:03:07.226667 | orchestrator | + ansible==11.11.0
2026-01-30 05:03:07.227486 | orchestrator | + ansible-core==2.18.13
2026-01-30 05:03:07.227549 | orchestrator | + cffi==2.0.0
2026-01-30 05:03:07.227564 | orchestrator | + cryptography==46.0.4
2026-01-30 05:03:07.227575 | orchestrator | + jinja2==3.1.6
2026-01-30 05:03:07.227585 | orchestrator | + markupsafe==3.0.3
2026-01-30 05:03:07.227595 | orchestrator | + netaddr==1.3.0
2026-01-30 05:03:07.227624 | orchestrator | + packaging==26.0
2026-01-30 05:03:07.227634 | orchestrator | + pycparser==3.0
2026-01-30 05:03:07.227644 | orchestrator | + pyyaml==6.0.3
2026-01-30 05:03:07.227654 | orchestrator | + resolvelib==1.0.1
2026-01-30 05:03:08.319170 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-195452w6tzhci6/tmpgh9532q6/ansible-collection-services5jxdmke1'...
2026-01-30 05:03:09.660259 | orchestrator | Your branch is up to date with 'origin/main'.
2026-01-30 05:03:09.660383 | orchestrator | Already on 'main'
2026-01-30 05:03:10.125740 | orchestrator | Starting galaxy collection install process
2026-01-30 05:03:10.125838 | orchestrator | Process install dependency map
2026-01-30 05:03:10.125848 | orchestrator | Starting collection install process
2026-01-30 05:03:10.125895 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-01-30 05:03:10.125906 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-01-30 05:03:10.125913 | orchestrator | osism.services:999.0.0 was installed successfully
2026-01-30 05:03:10.643497 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-195500fbgaa1b5/tmpksq5sbto/ansible-playbooks-managerv_o6un2e'...
2026-01-30 05:03:11.261200 | orchestrator | Your branch is up to date with 'origin/main'.
2026-01-30 05:03:11.261300 | orchestrator | Already on 'main'
2026-01-30 05:03:11.515548 | orchestrator | Starting galaxy collection install process
2026-01-30 05:03:11.515628 | orchestrator | Process install dependency map
2026-01-30 05:03:11.515638 | orchestrator | Starting collection install process
2026-01-30 05:03:11.515645 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-01-30 05:03:11.515654 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-01-30 05:03:11.515661 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-01-30 05:03:12.137132 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-01-30 05:03:12.137230 | orchestrator | -vvvv to see details
2026-01-30 05:03:12.519993 | orchestrator |
2026-01-30 05:03:12.520129 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-01-30 05:03:12.520156 | orchestrator |
2026-01-30 05:03:12.520173 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-30 05:03:16.327021 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:16.327117 | orchestrator |
2026-01-30 05:03:16.327131 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-30 05:03:16.391607 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-30 05:03:16.391729 | orchestrator |
2026-01-30 05:03:16.391782 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-30 05:03:18.145911 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:18.146054 | orchestrator |
2026-01-30 05:03:18.146065 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-30 05:03:18.202338 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:18.202443 | orchestrator |
2026-01-30 05:03:18.202459 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-30 05:03:18.272658 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-30 05:03:18.272757 | orchestrator |
2026-01-30 05:03:18.272771 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-30 05:03:22.417342 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-01-30 05:03:22.417416 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-01-30 05:03:22.417424 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-30 05:03:22.417438 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-01-30 05:03:22.417443 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-30 05:03:22.417448 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-30 05:03:22.417453 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-30 05:03:22.417459 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-01-30 05:03:22.417464 | orchestrator |
2026-01-30 05:03:22.417470 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-30 05:03:23.482633 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:23.482829 | orchestrator |
2026-01-30 05:03:23.482906 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-30 05:03:24.388464 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:24.388580 | orchestrator |
2026-01-30 05:03:24.388602 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-30 05:03:24.480714 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-30 05:03:24.480791 | orchestrator |
2026-01-30 05:03:24.480800 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-30 05:03:26.125488 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-01-30 05:03:26.125609 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-01-30 05:03:26.125630 | orchestrator |
2026-01-30 05:03:26.125646 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-30 05:03:27.057459 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:27.057575 | orchestrator |
2026-01-30 05:03:27.057598 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-30 05:03:27.123189 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:03:27.123279 | orchestrator |
2026-01-30 05:03:27.123293 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-30 05:03:27.203295 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-30 05:03:27.203399 | orchestrator |
2026-01-30 05:03:27.203415 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-30 05:03:28.125470 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:28.125576 | orchestrator |
2026-01-30 05:03:28.125593 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-30 05:03:28.181616 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-30 05:03:28.181694 | orchestrator |
2026-01-30 05:03:28.181703 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-30 05:03:30.057245 | orchestrator | ok: [testbed-manager] => (item=None)
2026-01-30 05:03:30.057331 | orchestrator | ok: [testbed-manager] => (item=None)
2026-01-30 05:03:30.057340 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:30.057347 | orchestrator |
2026-01-30 05:03:30.057353 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-30 05:03:30.914996 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:30.915090 | orchestrator |
2026-01-30 05:03:30.915105 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-30 05:03:30.976303 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:03:30.976428 | orchestrator |
2026-01-30 05:03:30.976456 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-30 05:03:31.069071 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-30 05:03:31.069162 | orchestrator |
2026-01-30 05:03:31.069173 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-30 05:03:31.717401 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:31.717506 | orchestrator |
2026-01-30 05:03:31.717523 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-30 05:03:32.171585 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:32.171705 | orchestrator |
2026-01-30 05:03:32.171735 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-30 05:03:33.928521 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-01-30 05:03:33.928652 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-01-30 05:03:33.928680 | orchestrator |
2026-01-30 05:03:33.928703 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-30 05:03:35.085927 | orchestrator | changed: [testbed-manager]
2026-01-30 05:03:35.086010 | orchestrator |
2026-01-30 05:03:35.086047 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-30 05:03:35.613595 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:35.613687 | orchestrator |
2026-01-30 05:03:35.613700 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-30 05:03:36.135919 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:36.135993 | orchestrator |
2026-01-30 05:03:36.136020 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-30 05:03:36.192537 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:03:36.192619 | orchestrator |
2026-01-30 05:03:36.192629 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-30 05:03:36.256633 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-30 05:03:36.256739 | orchestrator |
2026-01-30 05:03:36.256748 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-30 05:03:36.308245 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:36.308321 | orchestrator |
2026-01-30 05:03:36.308329 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-30 05:03:38.980066 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-01-30 05:03:38.980174 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-01-30 05:03:38.980183 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-01-30 05:03:38.980188 | orchestrator |
2026-01-30 05:03:38.980193 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-30 05:03:39.978677 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:39.978763 | orchestrator |
2026-01-30 05:03:39.978772 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-30 05:03:41.006181 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:41.006268 | orchestrator |
2026-01-30 05:03:41.006277 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-30 05:03:42.012885 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:42.012970 | orchestrator |
2026-01-30 05:03:42.012978 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-30 05:03:42.093028 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-30 05:03:42.093107 | orchestrator |
2026-01-30 05:03:42.093117 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-30 05:03:42.154347 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:42.154456 | orchestrator |
2026-01-30 05:03:42.154474 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-30 05:03:43.155957 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-01-30 05:03:43.156040 | orchestrator |
2026-01-30 05:03:43.156051 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-30 05:03:43.236895 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-30 05:03:43.237025 | orchestrator |
2026-01-30 05:03:43.237053 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-30 05:03:44.207317 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:44.207414 | orchestrator |
2026-01-30 05:03:44.207427 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-30 05:03:45.188457 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:45.188552 | orchestrator |
2026-01-30 05:03:45.188566 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-30 05:03:45.261194 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:03:45.261272 | orchestrator |
2026-01-30 05:03:45.261281 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-30 05:03:45.327935 | orchestrator | ok: [testbed-manager]
2026-01-30 05:03:45.328021 | orchestrator |
2026-01-30 05:03:45.328035 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-30 05:03:46.596417 | orchestrator | changed: [testbed-manager]
2026-01-30 05:03:46.596541 | orchestrator |
2026-01-30 05:03:46.596560 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-30 05:04:49.088398 | orchestrator | changed: [testbed-manager]
2026-01-30 05:04:49.088482 | orchestrator |
2026-01-30 05:04:49.088489 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-30 05:04:50.165942 | orchestrator | ok: [testbed-manager]
2026-01-30 05:04:50.166083 | orchestrator |
2026-01-30 05:04:50.166097 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-30 05:04:50.230357 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:04:50.230484 | orchestrator |
2026-01-30 05:04:50.230513 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-01-30 05:04:51.041132 | orchestrator | ok: [testbed-manager]
2026-01-30 05:04:51.041220 | orchestrator |
2026-01-30 05:04:51.041232 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-30 05:04:51.113100 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:04:51.113197 | orchestrator |
2026-01-30 05:04:51.113211 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-30 05:04:51.113223 | orchestrator |
2026-01-30 05:04:51.113234 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-30 05:05:05.984499 | orchestrator | changed: [testbed-manager]
2026-01-30 05:05:05.984620 | orchestrator |
2026-01-30 05:05:05.984639 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-30 05:06:06.053101 | orchestrator | Pausing for 60 seconds
2026-01-30 05:06:06.053248 | orchestrator | changed: [testbed-manager]
2026-01-30 05:06:06.053279 | orchestrator |
2026-01-30 05:06:06.053300 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-01-30 05:06:06.101807 | orchestrator | ok: [testbed-manager]
2026-01-30 05:06:06.101887 | orchestrator |
2026-01-30 05:06:06.101902 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-30 05:06:09.637511 | orchestrator | changed: [testbed-manager]
2026-01-30 05:06:09.637615 | orchestrator |
2026-01-30 05:06:09.637633 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-30 05:07:12.241254 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-30 05:07:12.241372 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-30 05:07:12.241384 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-01-30 05:07:12.241392 | orchestrator | changed: [testbed-manager] 2026-01-30 05:07:12.241402 | orchestrator | 2026-01-30 05:07:12.241410 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-01-30 05:07:22.588521 | orchestrator | changed: [testbed-manager] 2026-01-30 05:07:22.588693 | orchestrator | 2026-01-30 05:07:22.588708 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-01-30 05:07:22.682381 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-01-30 05:07:22.682504 | orchestrator | 2026-01-30 05:07:22.682519 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-01-30 05:07:22.682529 | orchestrator | 2026-01-30 05:07:22.682536 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-01-30 05:07:22.742845 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:07:22.742914 | orchestrator | 2026-01-30 05:07:22.742921 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-01-30 05:07:22.834627 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-01-30 05:07:22.834716 | orchestrator | 2026-01-30 05:07:22.834755 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-01-30 05:07:23.834228 | orchestrator | changed: [testbed-manager] 2026-01-30 05:07:23.834315 | orchestrator | 2026-01-30 05:07:23.834325 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-01-30 05:07:27.320351 
| orchestrator | ok: [testbed-manager]
2026-01-30 05:07:27.320428 | orchestrator |
2026-01-30 05:07:27.320437 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-30 05:07:27.404508 | orchestrator | ok: [testbed-manager] => {
2026-01-30 05:07:27.404670 | orchestrator | "version_check_result.stdout_lines": [
2026-01-30 05:07:27.404689 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-30 05:07:27.404701 | orchestrator | "Checking running containers against expected versions...",
2026-01-30 05:07:27.404714 | orchestrator | "",
2026-01-30 05:07:27.404725 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-30 05:07:27.404736 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-01-30 05:07:27.404748 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.404759 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-01-30 05:07:27.404777 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.404796 | orchestrator | "",
2026-01-30 05:07:27.404816 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-30 05:07:27.404834 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-01-30 05:07:27.404849 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.404861 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-01-30 05:07:27.404872 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.404882 | orchestrator | "",
2026-01-30 05:07:27.404893 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-30 05:07:27.404904 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-01-30 05:07:27.404915 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.404926 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-01-30 05:07:27.404936 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.404947 | orchestrator | "",
2026-01-30 05:07:27.404957 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-30 05:07:27.404968 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-01-30 05:07:27.404979 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.404990 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-01-30 05:07:27.405000 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405013 | orchestrator | "",
2026-01-30 05:07:27.405026 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-30 05:07:27.405040 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-01-30 05:07:27.405053 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405066 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-01-30 05:07:27.405078 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405091 | orchestrator | "",
2026-01-30 05:07:27.405103 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-30 05:07:27.405136 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405150 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405163 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405175 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405187 | orchestrator | "",
2026-01-30 05:07:27.405199 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-30 05:07:27.405211 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-30 05:07:27.405229 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405248 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-30 05:07:27.405267 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405287 | orchestrator | "",
2026-01-30 05:07:27.405306 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-01-30 05:07:27.405326 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-30 05:07:27.405344 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405376 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-01-30 05:07:27.405395 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405412 | orchestrator | "",
2026-01-30 05:07:27.405424 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-01-30 05:07:27.405435 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-01-30 05:07:27.405445 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405456 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-01-30 05:07:27.405467 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405477 | orchestrator | "",
2026-01-30 05:07:27.405492 | orchestrator | "Checking service: redis (Redis Cache)",
2026-01-30 05:07:27.405503 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-30 05:07:27.405514 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405525 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-01-30 05:07:27.405536 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405546 | orchestrator | "",
2026-01-30 05:07:27.405557 | orchestrator | "Checking service: api (OSISM API Service)",
2026-01-30 05:07:27.405595 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405606 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405617 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405627 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405638 | orchestrator | "",
2026-01-30 05:07:27.405649 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-01-30 05:07:27.405659 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405670 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405681 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405691 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405702 | orchestrator | "",
2026-01-30 05:07:27.405713 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-01-30 05:07:27.405723 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405734 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405744 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405755 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405766 | orchestrator | "",
2026-01-30 05:07:27.405777 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-01-30 05:07:27.405787 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405798 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405809 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405840 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405852 | orchestrator | "",
2026-01-30 05:07:27.405863 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-01-30 05:07:27.405874 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405894 | orchestrator | " Enabled: true",
2026-01-30 05:07:27.405904 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-01-30 05:07:27.405915 | orchestrator | " Status: ✅ MATCH",
2026-01-30 05:07:27.405926 | orchestrator | "",
2026-01-30 05:07:27.405937 | orchestrator | "=== Summary ===",
2026-01-30 05:07:27.405948 | orchestrator | "Errors (version mismatches): 0",
2026-01-30 05:07:27.405959 | orchestrator | "Warnings (expected containers not running): 0",
2026-01-30 05:07:27.405969 | orchestrator | "",
2026-01-30 05:07:27.405980 | orchestrator | "✅ All running containers match expected versions!"
2026-01-30 05:07:27.405991 | orchestrator | ]
2026-01-30 05:07:27.406002 | orchestrator | }
2026-01-30 05:07:27.406014 | orchestrator |
2026-01-30 05:07:27.406088 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-01-30 05:07:27.474376 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:07:27.474468 | orchestrator |
2026-01-30 05:07:27.474482 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 05:07:27.474496 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-01-30 05:07:27.474508 | orchestrator |
2026-01-30 05:07:39.971794 | orchestrator | 2026-01-30 05:07:39 | INFO  | Task 43b0d967-f3f9-4801-9894-00bcc02d1a85 (sync inventory) is running in background. Output coming soon.
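The version check output above is produced by a deployed script that compares each container's running image reference against the expected one. A minimal sketch of that per-service comparison (a hypothetical `check_version` helper, not the role's actual script; it assumes the `.Config.Image` field reported by `docker inspect`):

```shell
# Hypothetical sketch: compare a container's running image with the expected
# reference, mimicking the "Expected/Running/Status" lines in the log above.
check_version() {
    local name=$1 expected=$2 running
    # The image reference the container was created from.
    running=$(docker inspect -f '{{.Config.Image}}' "$name") || return 2
    if [ "$running" = "$expected" ]; then
        echo "  Status: MATCH"
    else
        echo "  Status: MISMATCH (running: $running)"
        return 1
    fi
}
```

On mismatch the helper returns non-zero, which is the kind of signal the summary's "Errors (version mismatches)" counter can accumulate.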
2026-01-30 05:08:06.239111 | orchestrator | 2026-01-30 05:07:41 | INFO  | Starting group_vars file reorganization
2026-01-30 05:08:06.239214 | orchestrator | 2026-01-30 05:07:41 | INFO  | Moved 0 file(s) to their respective directories
2026-01-30 05:08:06.239230 | orchestrator | 2026-01-30 05:07:41 | INFO  | Group_vars file reorganization completed
2026-01-30 05:08:06.239260 | orchestrator | 2026-01-30 05:07:44 | INFO  | Starting variable preparation from inventory
2026-01-30 05:08:06.239271 | orchestrator | 2026-01-30 05:07:47 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-30 05:08:06.239281 | orchestrator | 2026-01-30 05:07:47 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-30 05:08:06.239289 | orchestrator | 2026-01-30 05:07:47 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-30 05:08:06.239299 | orchestrator | 2026-01-30 05:07:47 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-30 05:08:06.239308 | orchestrator | 2026-01-30 05:07:47 | INFO  | Variable preparation completed
2026-01-30 05:08:06.239318 | orchestrator | 2026-01-30 05:07:48 | INFO  | Starting inventory overwrite handling
2026-01-30 05:08:06.239328 | orchestrator | 2026-01-30 05:07:48 | INFO  | Handling group overwrites in 99-overwrite
2026-01-30 05:08:06.239338 | orchestrator | 2026-01-30 05:07:48 | INFO  | Removing group frr:children from 60-generic
2026-01-30 05:08:06.239348 | orchestrator | 2026-01-30 05:07:48 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-30 05:08:06.239358 | orchestrator | 2026-01-30 05:07:48 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-30 05:08:06.239369 | orchestrator | 2026-01-30 05:07:48 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-30 05:08:06.239379 | orchestrator | 2026-01-30 05:07:48 | INFO  | Handling group overwrites in 20-roles
2026-01-30 05:08:06.239389 | orchestrator | 2026-01-30 05:07:48 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-30 05:08:06.239398 | orchestrator | 2026-01-30 05:07:48 | INFO  | Removed 5 group(s) in total
2026-01-30 05:08:06.239408 | orchestrator | 2026-01-30 05:07:48 | INFO  | Inventory overwrite handling completed
2026-01-30 05:08:06.239418 | orchestrator | 2026-01-30 05:07:49 | INFO  | Starting merge of inventory files
2026-01-30 05:08:06.239428 | orchestrator | 2026-01-30 05:07:49 | INFO  | Inventory files merged successfully
2026-01-30 05:08:06.239462 | orchestrator | 2026-01-30 05:07:54 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-30 05:08:06.239473 | orchestrator | 2026-01-30 05:08:04 | INFO  | Successfully wrote ClusterShell configuration
2026-01-30 05:08:06.521109 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-01-30 05:08:06.521208 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-01-30 05:08:06.521271 | orchestrator | + local max_attempts=60
2026-01-30 05:08:06.521289 | orchestrator | + local name=kolla-ansible
2026-01-30 05:08:06.521301 | orchestrator | + local attempt_num=1
2026-01-30 05:08:06.521402 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-01-30 05:08:06.561825 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-30 05:08:06.561912 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-01-30 05:08:06.561925 | orchestrator | + local max_attempts=60
2026-01-30 05:08:06.561936 | orchestrator | + local name=osism-ansible
2026-01-30 05:08:06.561946 | orchestrator | + local attempt_num=1
2026-01-30 05:08:06.562901 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-01-30 05:08:06.599029 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-01-30 05:08:06.599121 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-01-30 05:08:06.796158 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-01-30 05:08:06.796242 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.796253 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.796262 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-01-30 05:08:06.796275 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-01-30 05:08:06.796283 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.796291 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.796299 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up About a minute (healthy)
2026-01-30 05:08:06.796307 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 25 seconds ago
2026-01-30 05:08:06.796315 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 2 minutes (healthy) 3306/tcp
2026-01-30 05:08:06.796323 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.796331 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 2 minutes (healthy) 6379/tcp
2026-01-30 05:08:06.796338 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.796368 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-01-30 05:08:06.796376 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.796384 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 2 minutes (healthy)
2026-01-30 05:08:06.805379 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-01-30 05:08:06.805454 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-01-30 05:08:06.805463 | orchestrator | + osism apply facts
2026-01-30 05:08:18.920119 | orchestrator | 2026-01-30 05:08:18 | INFO  | Task 83e03089-f7aa-4cb1-9ba1-36ad5890cb31 (facts) was prepared for execution.
2026-01-30 05:08:18.920255 | orchestrator | 2026-01-30 05:08:18 | INFO  | It takes a moment until task 83e03089-f7aa-4cb1-9ba1-36ad5890cb31 (facts) has been started and output is visible here.
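The `wait_for_container_healthy` calls traced above poll `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A sketch of such a polling loop (an assumed reconstruction, not the testbed's actual script; the status command is made injectable here so the loop can be exercised without Docker):

```shell
# Sketch of a health-wait loop like the traced wait_for_container_healthy.
# "$@" after the first argument is the command that prints the health status;
# the real script runs: docker inspect -f '{{.State.Health.Status}}' NAME
wait_for_healthy() {
    local max_attempts=$1; shift
    local attempt_num=1
    while [ "$("$@")" != "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            return 1   # gave up: container never became healthy
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1        # real scripts typically wait a few seconds per try
    done
    return 0
}
```

With `set -e` active, as in the traced scripts, a non-zero return from this function aborts the whole upgrade step, which is the desired behavior when a service never comes up.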
2026-01-30 05:08:41.036414 | orchestrator |
2026-01-30 05:08:41.036558 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-30 05:08:41.036576 | orchestrator |
2026-01-30 05:08:41.036587 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-30 05:08:41.036597 | orchestrator | Friday 30 January 2026 05:08:25 +0000 (0:00:01.972) 0:00:01.972 ********
2026-01-30 05:08:41.036607 | orchestrator | ok: [testbed-manager]
2026-01-30 05:08:41.036618 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:08:41.036627 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:08:41.036637 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:08:41.036643 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:08:41.036649 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:08:41.036654 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:08:41.036659 | orchestrator |
2026-01-30 05:08:41.036665 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-30 05:08:41.036671 | orchestrator | Friday 30 January 2026 05:08:28 +0000 (0:00:03.444) 0:00:05.416 ********
2026-01-30 05:08:41.036676 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:08:41.036682 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:08:41.036687 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:08:41.036692 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:08:41.036697 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:08:41.036702 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:08:41.036707 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:08:41.036712 | orchestrator |
2026-01-30 05:08:41.036718 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-30 05:08:41.036723 | orchestrator |
2026-01-30 05:08:41.036728 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-30 05:08:41.036733 | orchestrator | Friday 30 January 2026 05:08:30 +0000 (0:00:02.341) 0:00:07.758 ********
2026-01-30 05:08:41.036738 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:08:41.036759 | orchestrator | ok: [testbed-manager]
2026-01-30 05:08:41.036765 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:08:41.036770 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:08:41.036778 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:08:41.036783 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:08:41.036788 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:08:41.036793 | orchestrator |
2026-01-30 05:08:41.036798 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-30 05:08:41.036803 | orchestrator |
2026-01-30 05:08:41.036809 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-30 05:08:41.036814 | orchestrator | Friday 30 January 2026 05:08:37 +0000 (0:00:06.929) 0:00:14.688 ********
2026-01-30 05:08:41.036819 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:08:41.036840 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:08:41.036845 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:08:41.036850 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:08:41.036855 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:08:41.036860 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:08:41.036865 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:08:41.036870 | orchestrator |
2026-01-30 05:08:41.036875 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 05:08:41.036881 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:08:41.036887 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:08:41.036892 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:08:41.036897 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:08:41.036902 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:08:41.036907 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:08:41.036912 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:08:41.036917 | orchestrator |
2026-01-30 05:08:41.036922 | orchestrator |
2026-01-30 05:08:41.036928 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 05:08:41.036933 | orchestrator | Friday 30 January 2026 05:08:40 +0000 (0:00:02.822) 0:00:17.510 ********
2026-01-30 05:08:41.036938 | orchestrator | ===============================================================================
2026-01-30 05:08:41.036943 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.93s
2026-01-30 05:08:41.036949 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.44s
2026-01-30 05:08:41.036954 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.82s
2026-01-30 05:08:41.036959 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.34s
2026-01-30 05:08:41.310111 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-01-30 05:08:41.409872 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-30 05:08:41.410910 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-01-30 05:08:41.455652 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-01-30 05:08:41.455731 |
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-01-30 05:08:41.461408 | orchestrator | + set -e 2026-01-30 05:08:41.461459 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-01-30 05:08:41.461470 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-01-30 05:08:41.469980 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-01-30 05:08:41.476841 | orchestrator | 2026-01-30 05:08:41.476909 | orchestrator | # UPGRADE SERVICES 2026-01-30 05:08:41.476924 | orchestrator | 2026-01-30 05:08:41.476936 | orchestrator | + set -e 2026-01-30 05:08:41.476948 | orchestrator | + echo 2026-01-30 05:08:41.476959 | orchestrator | + echo '# UPGRADE SERVICES' 2026-01-30 05:08:41.476970 | orchestrator | + echo 2026-01-30 05:08:41.478666 | orchestrator | + source /opt/manager-vars.sh 2026-01-30 05:08:41.478708 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-30 05:08:41.478719 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-30 05:08:41.478730 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-30 05:08:41.478741 | orchestrator | ++ CEPH_VERSION=reef 2026-01-30 05:08:41.478752 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-30 05:08:41.478764 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-30 05:08:41.478775 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-30 05:08:41.478812 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-30 05:08:41.478823 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-30 05:08:41.478834 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-30 05:08:41.478845 | orchestrator | ++ export ARA=false 2026-01-30 05:08:41.478856 | orchestrator | ++ ARA=false 2026-01-30 05:08:41.478866 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-30 05:08:41.478877 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-30 05:08:41.478888 | orchestrator | ++ export TEMPEST=false 
2026-01-30 05:08:41.478898 | orchestrator | ++ TEMPEST=false 2026-01-30 05:08:41.478909 | orchestrator | ++ export IS_ZUUL=true 2026-01-30 05:08:41.478920 | orchestrator | ++ IS_ZUUL=true 2026-01-30 05:08:41.478931 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 05:08:41.478942 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 05:08:41.478953 | orchestrator | ++ export EXTERNAL_API=false 2026-01-30 05:08:41.478963 | orchestrator | ++ EXTERNAL_API=false 2026-01-30 05:08:41.478974 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-30 05:08:41.478985 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-30 05:08:41.478995 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-30 05:08:41.479006 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-30 05:08:41.479017 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-30 05:08:41.479028 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-30 05:08:41.479038 | orchestrator | ++ export RABBITMQ3TO4=true 2026-01-30 05:08:41.479049 | orchestrator | ++ RABBITMQ3TO4=true 2026-01-30 05:08:41.479060 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-01-30 05:08:41.479070 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-01-30 05:08:41.479082 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-01-30 05:08:41.482866 | orchestrator | + set -e 2026-01-30 05:08:41.482922 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-30 05:08:41.483132 | orchestrator | ++ export INTERACTIVE=false 2026-01-30 05:08:41.483156 | orchestrator | ++ INTERACTIVE=false 2026-01-30 05:08:41.483167 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-30 05:08:41.483178 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-30 05:08:41.483394 | orchestrator | + source /opt/manager-vars.sh 2026-01-30 05:08:41.483451 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-30 05:08:41.483463 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-30 05:08:41.483782 | orchestrator | ++ 
export CEPH_VERSION=reef 2026-01-30 05:08:41.483811 | orchestrator | ++ CEPH_VERSION=reef 2026-01-30 05:08:41.483825 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-30 05:08:41.483836 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-30 05:08:41.483868 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-30 05:08:41.483880 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-30 05:08:41.483891 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-30 05:08:41.483910 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-30 05:08:41.483928 | orchestrator | ++ export ARA=false 2026-01-30 05:08:41.483945 | orchestrator | ++ ARA=false 2026-01-30 05:08:41.483956 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-30 05:08:41.483966 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-30 05:08:41.483977 | orchestrator | ++ export TEMPEST=false 2026-01-30 05:08:41.483987 | orchestrator | ++ TEMPEST=false 2026-01-30 05:08:41.483998 | orchestrator | ++ export IS_ZUUL=true 2026-01-30 05:08:41.484008 | orchestrator | ++ IS_ZUUL=true 2026-01-30 05:08:41.484019 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 05:08:41.484030 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182 2026-01-30 05:08:41.484041 | orchestrator | ++ export EXTERNAL_API=false 2026-01-30 05:08:41.484051 | orchestrator | ++ EXTERNAL_API=false 2026-01-30 05:08:41.484063 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-30 05:08:41.484081 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-30 05:08:41.484098 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-30 05:08:41.484113 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-30 05:08:41.484124 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-30 05:08:41.484135 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-30 05:08:41.484145 | orchestrator | ++ export RABBITMQ3TO4=true 2026-01-30 05:08:41.484156 | orchestrator | ++ RABBITMQ3TO4=true 2026-01-30 05:08:41.484166 | orchestrator | 
2026-01-30 05:08:41.484186 | orchestrator | # PULL IMAGES
2026-01-30 05:08:41.484198 | orchestrator |
2026-01-30 05:08:41.484212 | orchestrator | + echo
2026-01-30 05:08:41.484228 | orchestrator | + echo '# PULL IMAGES'
2026-01-30 05:08:41.484240 | orchestrator | + echo
2026-01-30 05:08:41.485210 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-30 05:08:41.531923 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-30 05:08:41.531995 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-30 05:08:43.689059 | orchestrator | 2026-01-30 05:08:43 | INFO  | Trying to run play pull-images in environment custom
2026-01-30 05:08:53.839019 | orchestrator | 2026-01-30 05:08:53 | INFO  | Task ad1d3638-0569-45bd-aa29-3d83ecc8f58c (pull-images) was prepared for execution.
2026-01-30 05:08:53.839167 | orchestrator | 2026-01-30 05:08:53 | INFO  | Task ad1d3638-0569-45bd-aa29-3d83ecc8f58c is running in background. No more output. Check ARA for logs.
2026-01-30 05:08:54.188918 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-01-30 05:08:54.199172 | orchestrator | + set -e
2026-01-30 05:08:54.199241 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 05:08:54.199250 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 05:08:54.199258 | orchestrator | ++ INTERACTIVE=false
2026-01-30 05:08:54.199264 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 05:08:54.199271 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 05:08:54.199278 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-30 05:08:54.200707 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-30 05:08:54.209981 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-01-30 05:08:54.210177 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-01-30 05:08:54.210402 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-01-30 05:08:54.263428 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-30 05:08:54.263562 | orchestrator | + osism apply frr
2026-01-30 05:09:06.441558 | orchestrator | 2026-01-30 05:09:06 | INFO  | Task 8fb64d0a-b601-43f8-b700-1d5bb89d6848 (frr) was prepared for execution.
2026-01-30 05:09:06.441647 | orchestrator | 2026-01-30 05:09:06 | INFO  | It takes a moment until task 8fb64d0a-b601-43f8-b700-1d5bb89d6848 (frr) has been started and output is visible here.
2026-01-30 05:09:25.773691 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-01-30 05:09:25.773812 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-01-30 05:09:25.773838 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-01-30 05:09:25.773848 | orchestrator | (): 'NoneType' object is not subscriptable
2026-01-30 05:09:25.773868 | orchestrator |
2026-01-30 05:09:25.773879 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-01-30 05:09:25.773888 | orchestrator |
2026-01-30 05:09:25.773898 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-01-30 05:09:25.773908 | orchestrator | Friday 30 January 2026 05:09:12 +0000 (0:00:01.851) 0:00:01.851 ********
2026-01-30 05:09:25.773918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-01-30 05:09:25.773929 | orchestrator |
2026-01-30 05:09:25.773938 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-01-30 05:09:25.773948 | orchestrator | Friday 30 January 2026 05:09:13 +0000 (0:00:01.050) 0:00:02.902 ********
2026-01-30 05:09:25.773958 | orchestrator | ok: [testbed-manager]
2026-01-30 05:09:25.773968 | orchestrator |
2026-01-30 05:09:25.773978 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-01-30 05:09:25.773988 | orchestrator | Friday 30 January 2026 05:09:14 +0000 (0:00:01.156) 0:00:04.058 ********
2026-01-30 05:09:25.774000 | orchestrator | ok: [testbed-manager]
2026-01-30 05:09:25.774087 | orchestrator |
2026-01-30 05:09:25.774105 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-01-30 05:09:25.774120 | orchestrator | Friday 30 January 2026 05:09:16 +0000 (0:00:01.693) 0:00:05.751 ********
2026-01-30 05:09:25.774135 | orchestrator | ok: [testbed-manager]
2026-01-30 05:09:25.774152 | orchestrator |
2026-01-30 05:09:25.774169 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-01-30 05:09:25.774187 | orchestrator | Friday 30 January 2026 05:09:17 +0000 (0:00:00.837) 0:00:06.589 ********
2026-01-30 05:09:25.774225 | orchestrator | ok: [testbed-manager]
2026-01-30 05:09:25.774237 | orchestrator |
2026-01-30 05:09:25.774249 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-01-30 05:09:25.774260 | orchestrator | Friday 30 January 2026 05:09:18 +0000 (0:00:00.797) 0:00:07.386 ********
2026-01-30 05:09:25.774272 | orchestrator | ok: [testbed-manager]
2026-01-30 05:09:25.774283 | orchestrator |
2026-01-30 05:09:25.774295 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-01-30 05:09:25.774306 | orchestrator | Friday 30 January 2026 05:09:19 +0000 (0:00:01.285) 0:00:08.672 ********
2026-01-30 05:09:25.774318 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:09:25.774328 | orchestrator |
2026-01-30 05:09:25.774338 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-01-30 05:09:25.774348 | orchestrator | Friday 30 January 2026 05:09:19 +0000 (0:00:00.169) 0:00:08.841 ********
2026-01-30 05:09:25.774357 | orchestrator | skipping: [testbed-manager]
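The upgrade scripts above gate each step on a semver comparison (`semver 9.5.0 7.0.0` followed by `[[ 1 -ge 0 ]]`). `semver_cmp` below is a hypothetical stand-in for the testbed's `semver` helper, assumed to print `1`, `0` or `-1` for greater, equal or less; a minimal sketch of the gate:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the testbed's `semver` helper: compare two
# MAJOR.MINOR.PATCH versions field by field, printing 1/0/-1.
semver_cmp() {
    local IFS=. a b i
    read -ra a <<<"${1%%-*}"   # strip pre-release suffixes like -rc.1
    read -ra b <<<"${2%%-*}"
    for i in 0 1 2; do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}

# Gate a step on "installed version >= required version", as in the trace.
if [[ $(semver_cmp 9.5.0 7.0.0) -ge 0 ]]; then
    echo "version gate passed"
fi
```

Note that stripping the pre-release suffix makes `10.0.0-rc.1` compare equal to `10.0.0`; full semver precedence treats a pre-release as lower than the release.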
2026-01-30 05:09:25.774417 | orchestrator |
2026-01-30 05:09:25.774427 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-01-30 05:09:25.774437 | orchestrator | Friday 30 January 2026 05:09:19 +0000 (0:00:00.166) 0:00:09.008 ********
2026-01-30 05:09:25.774446 | orchestrator | ok: [testbed-manager]
2026-01-30 05:09:25.774456 | orchestrator |
2026-01-30 05:09:25.774584 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-01-30 05:09:25.774601 | orchestrator | Friday 30 January 2026 05:09:20 +0000 (0:00:00.994) 0:00:10.003 ********
2026-01-30 05:09:25.774616 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-01-30 05:09:25.774651 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-01-30 05:09:25.774671 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-01-30 05:09:25.774688 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-01-30 05:09:25.774698 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-01-30 05:09:25.774708 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-01-30 05:09:25.774718 | orchestrator |
2026-01-30 05:09:25.774728 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-01-30 05:09:25.774738 | orchestrator | Friday 30 January 2026 05:09:23 +0000 (0:00:02.733) 0:00:12.736 ********
2026-01-30 05:09:25.774747 | orchestrator | ok: [testbed-manager]
2026-01-30 05:09:25.774757 | orchestrator |
2026-01-30 05:09:25.774767 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 05:09:25.774776 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 05:09:25.774791 | orchestrator |
2026-01-30 05:09:25.774807 | orchestrator |
2026-01-30 05:09:25.774825 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 05:09:25.774842 | orchestrator | Friday 30 January 2026 05:09:25 +0000 (0:00:01.748) 0:00:14.485 ********
2026-01-30 05:09:25.774858 | orchestrator | ===============================================================================
2026-01-30 05:09:25.774870 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.73s
2026-01-30 05:09:25.774885 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.75s
2026-01-30 05:09:25.774927 | orchestrator | osism.services.frr : Install frr package -------------------------------- 1.69s
2026-01-30 05:09:25.774945 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.29s
2026-01-30 05:09:25.774960 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.16s
2026-01-30 05:09:25.774976 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.05s
2026-01-30 05:09:25.774991 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.99s
2026-01-30 05:09:25.775024 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.84s
2026-01-30 05:09:25.775040 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.80s
2026-01-30 05:09:25.775057 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s
2026-01-30 05:09:25.775073 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.17s
2026-01-30 05:09:26.184821 | orchestrator | + osism apply kubernetes
2026-01-30 05:09:28.240108 | orchestrator | 2026-01-30 05:09:28 | INFO  | Task c44027a5-fcef-4604-acc0-8e5b8c9601bb (kubernetes) was prepared for execution.
2026-01-30 05:09:28.240230 | orchestrator | 2026-01-30 05:09:28 | INFO  | It takes a moment until task c44027a5-fcef-4604-acc0-8e5b8c9601bb (kubernetes) has been started and output is visible here.
2026-01-30 05:10:12.733365 | orchestrator |
2026-01-30 05:10:12.733587 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-01-30 05:10:12.733608 | orchestrator |
2026-01-30 05:10:12.733621 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-01-30 05:10:12.733633 | orchestrator | Friday 30 January 2026 05:09:34 +0000 (0:00:01.816) 0:00:01.816 ********
2026-01-30 05:10:12.733644 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:10:12.733656 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:10:12.733705 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:10:12.733718 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:10:12.733729 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:10:12.733740 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:10:12.733751 | orchestrator |
2026-01-30 05:10:12.733762 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-01-30 05:10:12.733773 | orchestrator | Friday 30 January 2026 05:09:38 +0000 (0:00:04.582) 0:00:06.399 ********
2026-01-30 05:10:12.733784 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:10:12.733796 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:10:12.733807 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:10:12.733818 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:10:12.733831 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:10:12.733843 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:10:12.733855 | orchestrator |
2026-01-30 05:10:12.733867 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-01-30 05:10:12.733880 | orchestrator | Friday 30 January 2026 05:09:41 +0000 (0:00:02.092) 0:00:08.491 ********
2026-01-30 05:10:12.733893 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:10:12.733906 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:10:12.733919 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:10:12.733932 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:10:12.733944 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:10:12.733957 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:10:12.733968 | orchestrator |
2026-01-30 05:10:12.733981 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-01-30 05:10:12.733993 | orchestrator | Friday 30 January 2026 05:09:43 +0000 (0:00:02.440) 0:00:10.932 ********
2026-01-30 05:10:12.734006 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:10:12.734069 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:10:12.734083 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:10:12.734095 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:10:12.734108 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:10:12.734120 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:10:12.734163 | orchestrator |
2026-01-30 05:10:12.734177 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-01-30 05:10:12.734190 | orchestrator | Friday 30 January 2026 05:09:46 +0000 (0:00:02.629) 0:00:13.561 ********
2026-01-30 05:10:12.734202 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:10:12.734215 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:10:12.734226 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:10:12.734237 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:10:12.734270 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:10:12.734281 | orchestrator | ok: [testbed-node-1]
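The k3s_prereq tasks above enable IPv4/IPv6 forwarding (and, below, IPv6 router advertisements) on every node. A hedged sketch of what those prerequisites amount to as sysctl keys; it only writes a hypothetical `/tmp/99-k3s-forwarding.conf` (on a real node the file would live in `/etc/sysctl.d/` and be activated with `sysctl --system`):

```shell
#!/usr/bin/env bash
set -e

# Sketch only: persist the forwarding-related sysctl keys. The exact key
# set applied by k3s_prereq is assumed, not taken verbatim from the role.
cat > /tmp/99-k3s-forwarding.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.all.accept_ra = 2
EOF

echo "wrote $(wc -l < /tmp/99-k3s-forwarding.conf) sysctl keys"
```

`accept_ra = 2` keeps router advertisements honoured even with forwarding enabled, which is the usual reason the RA task exists alongside the forwarding ones.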
2026-01-30 05:10:12.734292 | orchestrator | 2026-01-30 05:10:12.734303 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-30 05:10:12.734314 | orchestrator | Friday 30 January 2026 05:09:48 +0000 (0:00:02.233) 0:00:15.795 ******** 2026-01-30 05:10:12.734325 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:10:12.734335 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:10:12.734346 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:10:12.734356 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:10:12.734367 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:10:12.734378 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:10:12.734389 | orchestrator | 2026-01-30 05:10:12.734400 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-30 05:10:12.734411 | orchestrator | Friday 30 January 2026 05:09:50 +0000 (0:00:02.323) 0:00:18.119 ******** 2026-01-30 05:10:12.734421 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.734454 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.734465 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.734476 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.734488 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:10:12.734499 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.734509 | orchestrator | 2026-01-30 05:10:12.734520 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-30 05:10:12.734531 | orchestrator | Friday 30 January 2026 05:09:52 +0000 (0:00:02.023) 0:00:20.143 ******** 2026-01-30 05:10:12.734542 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.734552 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.734563 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.734573 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.734596 | orchestrator 
| skipping: [testbed-node-1] 2026-01-30 05:10:12.734608 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.734619 | orchestrator | 2026-01-30 05:10:12.734629 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-30 05:10:12.734640 | orchestrator | Friday 30 January 2026 05:09:54 +0000 (0:00:02.056) 0:00:22.200 ******** 2026-01-30 05:10:12.734651 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 05:10:12.734662 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 05:10:12.734673 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.734683 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 05:10:12.734694 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 05:10:12.734705 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.734716 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 05:10:12.734726 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 05:10:12.734737 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.734748 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 05:10:12.734759 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 05:10:12.734769 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.734801 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-30 05:10:12.734813 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 05:10:12.734823 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:10:12.734834 | orchestrator | skipping: [testbed-node-2] => 
(item=net.bridge.bridge-nf-call-iptables)  2026-01-30 05:10:12.734844 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-30 05:10:12.734855 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.734866 | orchestrator | 2026-01-30 05:10:12.734887 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-30 05:10:12.734898 | orchestrator | Friday 30 January 2026 05:09:57 +0000 (0:00:02.275) 0:00:24.475 ******** 2026-01-30 05:10:12.734908 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.734919 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.734930 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.734940 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.734951 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:10:12.734961 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.734972 | orchestrator | 2026-01-30 05:10:12.734983 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-30 05:10:12.734995 | orchestrator | Friday 30 January 2026 05:09:59 +0000 (0:00:02.070) 0:00:26.546 ******** 2026-01-30 05:10:12.735006 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:10:12.735017 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:10:12.735027 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:10:12.735038 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:10:12.735049 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:10:12.735059 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:10:12.735070 | orchestrator | 2026-01-30 05:10:12.735080 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-30 05:10:12.735091 | orchestrator | Friday 30 January 2026 05:10:01 +0000 (0:00:02.052) 0:00:28.599 ******** 2026-01-30 05:10:12.735101 | orchestrator | ok: 
[testbed-node-4] 2026-01-30 05:10:12.735112 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:10:12.735123 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:10:12.735133 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:10:12.735144 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:10:12.735154 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:10:12.735165 | orchestrator | 2026-01-30 05:10:12.735175 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-30 05:10:12.735186 | orchestrator | Friday 30 January 2026 05:10:03 +0000 (0:00:02.750) 0:00:31.350 ******** 2026-01-30 05:10:12.735197 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.735207 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.735218 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.735229 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.735239 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:10:12.735250 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.735260 | orchestrator | 2026-01-30 05:10:12.735271 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-30 05:10:12.735282 | orchestrator | Friday 30 January 2026 05:10:05 +0000 (0:00:02.010) 0:00:33.360 ******** 2026-01-30 05:10:12.735292 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.735303 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.735314 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.735324 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.735335 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:10:12.735346 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.735356 | orchestrator | 2026-01-30 05:10:12.735367 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-30 
05:10:12.735379 | orchestrator | Friday 30 January 2026 05:10:08 +0000 (0:00:02.128) 0:00:35.489 ******** 2026-01-30 05:10:12.735390 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.735405 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.735416 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.735469 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.735481 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:10:12.735492 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.735502 | orchestrator | 2026-01-30 05:10:12.735513 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-30 05:10:12.735523 | orchestrator | Friday 30 January 2026 05:10:10 +0000 (0:00:01.949) 0:00:37.438 ******** 2026-01-30 05:10:12.735543 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-30 05:10:12.735554 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-30 05:10:12.735565 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.735575 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-30 05:10:12.735586 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-30 05:10:12.735596 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.735607 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-30 05:10:12.735617 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-30 05:10:12.735628 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:10:12.735639 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-30 05:10:12.735649 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-30 05:10:12.735660 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:10:12.735670 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-30 05:10:12.735681 | orchestrator | skipping: [testbed-node-1] 
=> (item=rancher/k3s)  2026-01-30 05:10:12.735691 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:10:12.735702 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-30 05:10:12.735712 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-01-30 05:10:12.735723 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:10:12.735734 | orchestrator | 2026-01-30 05:10:12.735744 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-01-30 05:10:12.735755 | orchestrator | Friday 30 January 2026 05:10:12 +0000 (0:00:02.219) 0:00:39.657 ******** 2026-01-30 05:10:12.735766 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:10:12.735776 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:10:12.735795 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:11:51.332664 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:11:51.332791 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:11:51.332811 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:11:51.332828 | orchestrator | 2026-01-30 05:11:51.332847 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-01-30 05:11:51.332865 | orchestrator | Friday 30 January 2026 05:10:14 +0000 (0:00:02.625) 0:00:42.283 ******** 2026-01-30 05:11:51.332880 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:11:51.332895 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:11:51.332909 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:11:51.332923 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:11:51.332937 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:11:51.332951 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:11:51.332966 | orchestrator | 2026-01-30 05:11:51.332983 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-01-30 
05:11:51.332999 | orchestrator | 2026-01-30 05:11:51.333017 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-01-30 05:11:51.333036 | orchestrator | Friday 30 January 2026 05:10:17 +0000 (0:00:02.633) 0:00:44.917 ******** 2026-01-30 05:11:51.333053 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:11:51.333070 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:11:51.333101 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:11:51.333118 | orchestrator | 2026-01-30 05:11:51.333141 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-01-30 05:11:51.333158 | orchestrator | Friday 30 January 2026 05:10:19 +0000 (0:00:01.880) 0:00:46.797 ******** 2026-01-30 05:11:51.333175 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:11:51.333191 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:11:51.333209 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:11:51.333228 | orchestrator | 2026-01-30 05:11:51.333246 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-01-30 05:11:51.333265 | orchestrator | Friday 30 January 2026 05:10:21 +0000 (0:00:02.101) 0:00:48.899 ******** 2026-01-30 05:11:51.333306 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:11:51.333325 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:11:51.333343 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:11:51.333388 | orchestrator | 2026-01-30 05:11:51.333407 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-01-30 05:11:51.333426 | orchestrator | Friday 30 January 2026 05:10:23 +0000 (0:00:02.189) 0:00:51.088 ******** 2026-01-30 05:11:51.333444 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:11:51.333462 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:11:51.333480 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:11:51.333497 | orchestrator | 2026-01-30 
05:11:51.333514 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-01-30 05:11:51.333531 | orchestrator | Friday 30 January 2026 05:10:25 +0000 (0:00:01.975) 0:00:53.064 ******** 2026-01-30 05:11:51.333548 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:11:51.333565 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:11:51.333582 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:11:51.333599 | orchestrator | 2026-01-30 05:11:51.333615 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-01-30 05:11:51.333632 | orchestrator | Friday 30 January 2026 05:10:27 +0000 (0:00:01.343) 0:00:54.407 ******** 2026-01-30 05:11:51.333650 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:11:51.333667 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:11:51.333683 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:11:51.333700 | orchestrator | 2026-01-30 05:11:51.333717 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-01-30 05:11:51.333734 | orchestrator | Friday 30 January 2026 05:10:28 +0000 (0:00:01.693) 0:00:56.101 ******** 2026-01-30 05:11:51.333750 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:11:51.333767 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:11:51.333784 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:11:51.333801 | orchestrator | 2026-01-30 05:11:51.333818 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-01-30 05:11:51.333835 | orchestrator | Friday 30 January 2026 05:10:30 +0000 (0:00:02.174) 0:00:58.276 ******** 2026-01-30 05:11:51.333852 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:11:51.333868 | orchestrator | 2026-01-30 05:11:51.333885 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] 
******************************* 2026-01-30 05:11:51.333902 | orchestrator | Friday 30 January 2026 05:10:32 +0000 (0:00:01.960) 0:01:00.237 ******** 2026-01-30 05:11:51.333918 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:11:51.333935 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:11:51.333952 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:11:51.333968 | orchestrator | 2026-01-30 05:11:51.333985 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-01-30 05:11:51.334002 | orchestrator | Friday 30 January 2026 05:10:35 +0000 (0:00:02.370) 0:01:02.607 ******** 2026-01-30 05:11:51.334113 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:11:51.334134 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:11:51.334151 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:11:51.334167 | orchestrator | 2026-01-30 05:11:51.334184 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-01-30 05:11:51.334201 | orchestrator | Friday 30 January 2026 05:10:36 +0000 (0:00:01.715) 0:01:04.323 ******** 2026-01-30 05:11:51.334218 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:11:51.334235 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:11:51.334252 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:11:51.334269 | orchestrator | 2026-01-30 05:11:51.334285 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-01-30 05:11:51.334301 | orchestrator | Friday 30 January 2026 05:10:38 +0000 (0:00:01.898) 0:01:06.221 ******** 2026-01-30 05:11:51.334456 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:11:51.334479 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:11:51.334497 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:11:51.334527 | orchestrator | 2026-01-30 05:11:51.334545 | orchestrator | TASK [k3s_server : Deploy metallb manifest] 
************************************
2026-01-30 05:11:51.334563 | orchestrator | Friday 30 January 2026 05:10:41 +0000 (0:00:02.460) 0:01:08.681 ********
2026-01-30 05:11:51.334582 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:11:51.334599 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:11:51.334640 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:11:51.334659 | orchestrator |
2026-01-30 05:11:51.334677 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-30 05:11:51.334695 | orchestrator | Friday 30 January 2026 05:10:42 +0000 (0:00:01.363) 0:01:10.045 ********
2026-01-30 05:11:51.334712 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:11:51.334730 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:11:51.334748 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:11:51.334766 | orchestrator |
2026-01-30 05:11:51.334784 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-30 05:11:51.334802 | orchestrator | Friday 30 January 2026 05:10:44 +0000 (0:00:01.526) 0:01:11.572 ********
2026-01-30 05:11:51.334820 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:11:51.334837 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:11:51.334855 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:11:51.334873 | orchestrator |
2026-01-30 05:11:51.334891 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-30 05:11:51.334909 | orchestrator | Friday 30 January 2026 05:10:46 +0000 (0:00:02.163) 0:01:13.736 ********
2026-01-30 05:11:51.334927 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:11:51.334945 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:11:51.334963 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:11:51.334980 | orchestrator |
2026-01-30 05:11:51.334997 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-30 05:11:51.335014 | orchestrator | Friday 30 January 2026 05:10:48 +0000 (0:00:01.876) 0:01:15.612 ********
2026-01-30 05:11:51.335031 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:11:51.335049 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:11:51.335067 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:11:51.335085 | orchestrator |
2026-01-30 05:11:51.335103 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-30 05:11:51.335120 | orchestrator | Friday 30 January 2026 05:10:49 +0000 (0:00:01.361) 0:01:16.974 ********
2026-01-30 05:11:51.335139 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-30 05:11:51.335158 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-30 05:11:51.335175 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-30 05:11:51.335192 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-30 05:11:51.335210 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-30 05:11:51.335228 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-30 05:11:51.335246 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:11:51.335264 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:11:51.335282 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:11:51.335300 | orchestrator |
2026-01-30 05:11:51.335318 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-30 05:11:51.335335 | orchestrator | Friday 30 January 2026 05:11:13 +0000 (0:00:23.481) 0:01:40.456 ********
2026-01-30 05:11:51.335353 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:11:51.335395 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:11:51.335423 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:11:51.335440 | orchestrator |
2026-01-30 05:11:51.335457 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-30 05:11:51.335474 | orchestrator | Friday 30 January 2026 05:11:14 +0000 (0:00:01.320) 0:01:41.777 ********
2026-01-30 05:11:51.335490 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:11:51.335507 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:11:51.335523 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:11:51.335540 | orchestrator |
2026-01-30 05:11:51.335557 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-30 05:11:51.335573 | orchestrator | Friday 30 January 2026 05:11:16 +0000 (0:00:02.138) 0:01:43.915 ********
2026-01-30 05:11:51.335589 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:11:51.335606 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:11:51.335622 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:11:51.335639 | orchestrator |
2026-01-30 05:11:51.335655 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-30 05:11:51.335672 | orchestrator | Friday 30 January 2026 05:11:18 +0000 (0:00:02.290) 0:01:46.205 ********
2026-01-30 05:11:51.335689 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:11:51.335705 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:11:51.335722 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:11:51.335738 | orchestrator |
2026-01-30 05:11:51.335754 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-30 05:11:51.335771 | orchestrator | Friday 30 January 2026 05:11:45 +0000 (0:00:27.110) 0:02:13.316 ********
2026-01-30 05:11:51.335788 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:11:51.335804 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:11:51.335820 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:11:51.335837 | orchestrator |
2026-01-30 05:11:51.335862 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-30 05:11:51.335879 | orchestrator | Friday 30 January 2026 05:11:47 +0000 (0:00:01.725) 0:02:15.042 ********
2026-01-30 05:11:51.335895 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:11:51.335912 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:11:51.335929 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:11:51.335945 | orchestrator |
2026-01-30 05:11:51.335962 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-30 05:11:51.335978 | orchestrator | Friday 30 January 2026 05:11:49 +0000 (0:00:01.734) 0:02:16.776 ********
2026-01-30 05:11:51.335995 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:11:51.336011 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:11:51.336027 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:11:51.336043 | orchestrator |
2026-01-30 05:11:51.336069 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-30 05:12:39.825071 | orchestrator | Friday 30 January 2026 05:11:51 +0000 (0:00:01.933) 0:02:18.710 ********
2026-01-30 05:12:39.825173 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:12:39.825187 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:12:39.825197 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:12:39.825206 | orchestrator |
2026-01-30 05:12:39.825215 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-30 05:12:39.825225 | orchestrator | Friday 30 January 2026 05:11:52 +0000 (0:00:01.662) 0:02:20.372 ********
2026-01-30 05:12:39.825234 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:12:39.825242 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:12:39.825251 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:12:39.825259 | orchestrator |
2026-01-30 05:12:39.825268 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-30 05:12:39.825277 | orchestrator | Friday 30 January 2026 05:11:54 +0000 (0:00:01.391) 0:02:21.764 ********
2026-01-30 05:12:39.825287 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:12:39.825296 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:12:39.825305 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:12:39.825314 | orchestrator |
2026-01-30 05:12:39.825322 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-30 05:12:39.825425 | orchestrator | Friday 30 January 2026 05:11:56 +0000 (0:00:01.778) 0:02:23.543 ********
2026-01-30 05:12:39.825449 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:12:39.825458 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:12:39.825467 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:12:39.825475 | orchestrator |
2026-01-30 05:12:39.825484 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-30 05:12:39.825492 | orchestrator | Friday 30 January 2026 05:11:58 +0000 (0:00:02.037) 0:02:25.580 ********
2026-01-30 05:12:39.825501 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:12:39.825510 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:12:39.825518 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:12:39.825527 | orchestrator |
2026-01-30 05:12:39.825535 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-30 05:12:39.825544 | orchestrator | Friday 30 January 2026 05:11:59 +0000 (0:00:01.765) 0:02:27.346 ********
2026-01-30 05:12:39.825565 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:12:39.825574 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:12:39.825583 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:12:39.825601 | orchestrator |
2026-01-30 05:12:39.825609 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-30 05:12:39.825618 | orchestrator | Friday 30 January 2026 05:12:01 +0000 (0:00:01.906) 0:02:29.253 ********
2026-01-30 05:12:39.825628 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:12:39.825638 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:12:39.825648 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:12:39.825658 | orchestrator |
2026-01-30 05:12:39.825668 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-30 05:12:39.825678 | orchestrator | Friday 30 January 2026 05:12:03 +0000 (0:00:01.326) 0:02:30.579 ********
2026-01-30 05:12:39.825687 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:12:39.825697 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:12:39.825708 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:12:39.825717 | orchestrator |
2026-01-30 05:12:39.825727 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-30 05:12:39.825737 | orchestrator | Friday 30 January 2026 05:12:04 +0000 (0:00:01.360) 0:02:31.940 ********
2026-01-30 05:12:39.825747 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:12:39.825756 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:12:39.825766 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:12:39.825776 | orchestrator |
2026-01-30 05:12:39.825785 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-30 05:12:39.825795 | orchestrator | Friday 30 January 2026 05:12:06 +0000 (0:00:01.716) 0:02:33.656 ********
2026-01-30 05:12:39.825805 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:12:39.825815 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:12:39.825825 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:12:39.825834 | orchestrator |
2026-01-30 05:12:39.825845 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-30 05:12:39.825855 | orchestrator | Friday 30 January 2026 05:12:07 +0000 (0:00:01.689) 0:02:35.346 ********
2026-01-30 05:12:39.825865 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-30 05:12:39.825876 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-30 05:12:39.825885 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-30 05:12:39.825895 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-30 05:12:39.825905 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-30 05:12:39.825915 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-30 05:12:39.825932 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-30 05:12:39.825943 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-30 05:12:39.825952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-30 05:12:39.825962 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-30 05:12:39.825972 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-30 05:12:39.825982 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-30 05:12:39.826005 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-30 05:12:39.826068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-30 05:12:39.826080 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-30 05:12:39.826089 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-30 05:12:39.826097 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-30 05:12:39.826106 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-30 05:12:39.826115 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-30 05:12:39.826123 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-30 05:12:39.826132 | orchestrator |
2026-01-30 05:12:39.826141 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-30 05:12:39.826150 | orchestrator |
2026-01-30 05:12:39.826158 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-30 05:12:39.826167 | orchestrator | Friday 30 January 2026 05:12:12 +0000 (0:00:04.728) 0:02:40.075 ********
2026-01-30 05:12:39.826176 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:12:39.826184 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:12:39.826193 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:12:39.826202 | orchestrator |
2026-01-30 05:12:39.826210 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-30 05:12:39.826219 | orchestrator | Friday 30 January 2026 05:12:14 +0000 (0:00:01.346) 0:02:41.421 ********
2026-01-30 05:12:39.826228 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:12:39.826237 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:12:39.826245 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:12:39.826253 | orchestrator |
2026-01-30 05:12:39.826262 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-30 05:12:39.826271 | orchestrator | Friday 30 January 2026 05:12:15 +0000 (0:00:01.692) 0:02:43.113 ********
2026-01-30 05:12:39.826280 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:12:39.826288 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:12:39.826297 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:12:39.826305 | orchestrator |
2026-01-30 05:12:39.826314 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-30 05:12:39.826354 | orchestrator | Friday 30 January 2026 05:12:17 +0000 (0:00:01.543) 0:02:44.657 ********
2026-01-30 05:12:39.826363 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 05:12:39.826372 | orchestrator |
2026-01-30 05:12:39.826381 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-30 05:12:39.826390 | orchestrator | Friday 30 January 2026 05:12:18 +0000 (0:00:01.714) 0:02:46.371 ********
2026-01-30 05:12:39.826398 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:12:39.826407 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:12:39.826416 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:12:39.826432 | orchestrator |
2026-01-30 05:12:39.826441 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-30 05:12:39.826449 | orchestrator | Friday 30 January 2026 05:12:20 +0000 (0:00:01.336) 0:02:47.708 ********
2026-01-30 05:12:39.826458 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:12:39.826466 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:12:39.826475 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:12:39.826483 | orchestrator |
2026-01-30 05:12:39.826492 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-30 05:12:39.826501 | orchestrator | Friday 30 January 2026 05:12:21 +0000 (0:00:01.370) 0:02:49.078 ********
2026-01-30 05:12:39.826509 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:12:39.826518 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:12:39.826527 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:12:39.826535 | orchestrator |
2026-01-30 05:12:39.826544 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-30 05:12:39.826553 | orchestrator | Friday 30 January 2026 05:12:23 +0000 (0:00:01.324) 0:02:50.403 ********
2026-01-30 05:12:39.826561 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:12:39.826570 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:12:39.826579 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:12:39.826587 | orchestrator |
2026-01-30 05:12:39.826596 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-30 05:12:39.826612 | orchestrator | Friday 30 January 2026 05:12:24 +0000 (0:00:01.653) 0:02:52.056 ********
2026-01-30 05:12:39.826621 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:12:39.826630 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:12:39.826639 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:12:39.826647 | orchestrator |
2026-01-30 05:12:39.826656 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-30 05:12:39.826674 | orchestrator | Friday 30 January 2026 05:12:27 +0000 (0:00:02.469) 0:02:54.525 ********
2026-01-30 05:12:39.826684 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:12:39.826791 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:12:39.826803 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:12:39.826839 | orchestrator |
2026-01-30 05:12:39.826849 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-30 05:12:39.826858 | orchestrator | Friday 30 January 2026 05:12:29 +0000 (0:00:02.262) 0:02:56.788 ********
2026-01-30 05:12:39.826866 | orchestrator | changed: [testbed-node-3]
2026-01-30 05:12:39.826875 | orchestrator | changed: [testbed-node-5]
2026-01-30 05:12:39.826884 | orchestrator | changed: [testbed-node-4]
2026-01-30 05:12:39.826892 | orchestrator |
2026-01-30 05:12:39.826901 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-30 05:12:39.826910 | orchestrator |
2026-01-30 05:12:39.826918 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-30 05:12:39.826927 | orchestrator | Friday 30 January 2026 05:12:37 +0000 (0:00:08.305) 0:03:05.094 ********
2026-01-30 05:12:39.826936 | orchestrator | ok: [testbed-manager]
2026-01-30 05:12:39.826944 | orchestrator |
2026-01-30 05:12:39.826953 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-30 05:12:39.826971 | orchestrator | Friday 30 January 2026 05:12:39 +0000 (0:00:02.116) 0:03:07.211 ********
2026-01-30 05:13:48.603015 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603107 | orchestrator |
2026-01-30 05:13:48.603118 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-30 05:13:48.603126 | orchestrator | Friday 30 January 2026 05:12:41 +0000 (0:00:01.393) 0:03:08.604 ********
2026-01-30 05:13:48.603136 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-30 05:13:48.603146 | orchestrator |
2026-01-30 05:13:48.603157 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-30 05:13:48.603166 | orchestrator | Friday 30 January 2026 05:12:42 +0000 (0:00:01.559) 0:03:10.164 ********
2026-01-30 05:13:48.603177 | orchestrator | changed: [testbed-manager]
2026-01-30 05:13:48.603261 | orchestrator |
2026-01-30 05:13:48.603275 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-30 05:13:48.603285 | orchestrator | Friday 30 January 2026 05:12:44 +0000 (0:00:01.867) 0:03:12.031 ********
2026-01-30 05:13:48.603295 | orchestrator | changed: [testbed-manager]
2026-01-30 05:13:48.603304 | orchestrator |
2026-01-30 05:13:48.603314 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-30 05:13:48.603350 | orchestrator | Friday 30 January 2026 05:12:46 +0000 (0:00:01.549) 0:03:13.581 ********
2026-01-30 05:13:48.603361 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-30 05:13:48.603371 | orchestrator |
2026-01-30 05:13:48.603382 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-30 05:13:48.603392 | orchestrator | Friday 30 January 2026 05:12:49 +0000 (0:00:02.887) 0:03:16.469 ********
2026-01-30 05:13:48.603402 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-30 05:13:48.603413 | orchestrator |
2026-01-30 05:13:48.603422 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-30 05:13:48.603433 | orchestrator | Friday 30 January 2026 05:12:50 +0000 (0:00:01.793) 0:03:18.263 ********
2026-01-30 05:13:48.603444 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603455 | orchestrator |
2026-01-30 05:13:48.603466 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-30 05:13:48.603478 | orchestrator | Friday 30 January 2026 05:12:52 +0000 (0:00:01.422) 0:03:19.686 ********
2026-01-30 05:13:48.603484 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603491 | orchestrator |
2026-01-30 05:13:48.603497 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-30 05:13:48.603503 | orchestrator |
2026-01-30 05:13:48.603509 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-30 05:13:48.603517 | orchestrator | Friday 30 January 2026 05:12:53 +0000 (0:00:01.706) 0:03:21.392 ********
2026-01-30 05:13:48.603527 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603536 | orchestrator |
2026-01-30 05:13:48.603555 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-30 05:13:48.603565 | orchestrator | Friday 30 January 2026 05:12:55 +0000 (0:00:01.114) 0:03:22.507 ********
2026-01-30 05:13:48.603575 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-30 05:13:48.603586 | orchestrator |
2026-01-30 05:13:48.603595 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-30 05:13:48.603605 | orchestrator | Friday 30 January 2026 05:12:56 +0000 (0:00:01.539) 0:03:24.046 ********
2026-01-30 05:13:48.603616 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603625 | orchestrator |
2026-01-30 05:13:48.603635 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-30 05:13:48.603646 | orchestrator | Friday 30 January 2026 05:12:58 +0000 (0:00:01.852) 0:03:25.899 ********
2026-01-30 05:13:48.603656 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603666 | orchestrator |
2026-01-30 05:13:48.603677 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-30 05:13:48.603686 | orchestrator | Friday 30 January 2026 05:13:01 +0000 (0:00:02.843) 0:03:28.743 ********
2026-01-30 05:13:48.603693 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603700 | orchestrator |
2026-01-30 05:13:48.603707 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-30 05:13:48.603714 | orchestrator | Friday 30 January 2026 05:13:02 +0000 (0:00:01.458) 0:03:30.202 ********
2026-01-30 05:13:48.603721 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603727 | orchestrator |
2026-01-30 05:13:48.603734 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-30 05:13:48.603742 | orchestrator | Friday 30 January 2026 05:13:04 +0000 (0:00:01.480) 0:03:31.682 ********
2026-01-30 05:13:48.603749 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603756 | orchestrator |
2026-01-30 05:13:48.603763 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-30 05:13:48.603778 | orchestrator | Friday 30 January 2026 05:13:05 +0000 (0:00:01.666) 0:03:33.349 ********
2026-01-30 05:13:48.603785 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603792 | orchestrator |
2026-01-30 05:13:48.603799 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-30 05:13:48.603806 | orchestrator | Friday 30 January 2026 05:13:08 +0000 (0:00:02.598) 0:03:35.947 ********
2026-01-30 05:13:48.603813 | orchestrator | ok: [testbed-manager]
2026-01-30 05:13:48.603819 | orchestrator |
2026-01-30 05:13:48.603826 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-30 05:13:48.603833 | orchestrator |
2026-01-30 05:13:48.603840 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-30 05:13:48.603847 | orchestrator | Friday 30 January 2026 05:13:10 +0000 (0:00:01.794) 0:03:37.741 ********
2026-01-30 05:13:48.603855 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:13:48.603862 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:13:48.603868 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:13:48.603875 | orchestrator |
2026-01-30 05:13:48.603882 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-30 05:13:48.603889 | orchestrator | Friday 30 January 2026 05:13:11 +0000 (0:00:01.342) 0:03:39.084 ********
2026-01-30 05:13:48.603896 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:13:48.603904 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:13:48.603911 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:13:48.603918 | orchestrator |
2026-01-30 05:13:48.603942 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-30 05:13:48.603949 | orchestrator | Friday 30 January 2026 05:13:13 +0000 (0:00:01.675) 0:03:40.760 ********
2026-01-30 05:13:48.603955 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:13:48.603962 | orchestrator |
2026-01-30 05:13:48.603968 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-01-30 05:13:48.603974 | orchestrator | Friday 30 January 2026 05:13:15 +0000 (0:00:01.709) 0:03:42.469 ********
2026-01-30 05:13:48.603980 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.603986 | orchestrator |
2026-01-30 05:13:48.603992 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-01-30 05:13:48.603999 | orchestrator | Friday 30 January 2026 05:13:16 +0000 (0:00:01.783) 0:03:44.253 ********
2026-01-30 05:13:48.604005 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604011 | orchestrator |
2026-01-30 05:13:48.604017 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-01-30 05:13:48.604023 | orchestrator | Friday 30 January 2026 05:13:18 +0000 (0:00:01.762) 0:03:46.016 ********
2026-01-30 05:13:48.604029 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:13:48.604035 | orchestrator |
2026-01-30 05:13:48.604042 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-01-30 05:13:48.604048 | orchestrator | Friday 30 January 2026 05:13:19 +0000 (0:00:01.149) 0:03:47.166 ********
2026-01-30 05:13:48.604054 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604060 | orchestrator |
2026-01-30 05:13:48.604067 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-01-30 05:13:48.604073 | orchestrator | Friday 30 January 2026 05:13:21 +0000 (0:00:01.972) 0:03:49.138 ********
2026-01-30 05:13:48.604079 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604085 | orchestrator |
2026-01-30 05:13:48.604091 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-01-30 05:13:48.604097 | orchestrator | Friday 30 January 2026 05:13:23 +0000 (0:00:02.116) 0:03:51.255 ********
2026-01-30 05:13:48.604103 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604109 | orchestrator |
2026-01-30 05:13:48.604116 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-01-30 05:13:48.604122 | orchestrator | Friday 30 January 2026 05:13:24 +0000 (0:00:01.129) 0:03:52.384 ********
2026-01-30 05:13:48.604133 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604139 | orchestrator |
2026-01-30 05:13:48.604145 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-01-30 05:13:48.604151 | orchestrator | Friday 30 January 2026 05:13:26 +0000 (0:00:01.124) 0:03:53.508 ********
2026-01-30 05:13:48.604157 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-01-30 05:13:48.604163 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-01-30 05:13:48.604171 | orchestrator | }
2026-01-30 05:13:48.604179 | orchestrator |
2026-01-30 05:13:48.604189 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-01-30 05:13:48.604237 | orchestrator | Friday 30 January 2026 05:13:27 +0000 (0:00:01.113) 0:03:54.622 ********
2026-01-30 05:13:48.604248 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:13:48.604258 | orchestrator |
2026-01-30 05:13:48.604268 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-01-30 05:13:48.604278 | orchestrator | Friday 30 January 2026 05:13:28 +0000 (0:00:01.131) 0:03:55.753 ********
2026-01-30 05:13:48.604287 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-01-30 05:13:48.604298 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-01-30 05:13:48.604308 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-01-30 05:13:48.604319 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-01-30 05:13:48.604330 | orchestrator |
2026-01-30 05:13:48.604337 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-01-30 05:13:48.604344 | orchestrator | Friday 30 January 2026 05:13:33 +0000 (0:00:05.445) 0:04:01.199 ********
2026-01-30 05:13:48.604350 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604356 | orchestrator |
2026-01-30 05:13:48.604362 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-01-30 05:13:48.604368 | orchestrator | Friday 30 January 2026 05:13:36 +0000 (0:00:02.375) 0:04:03.575 ********
2026-01-30 05:13:48.604374 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604380 | orchestrator |
2026-01-30 05:13:48.604386 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-01-30 05:13:48.604392 | orchestrator | Friday 30 January 2026 05:13:38 +0000 (0:00:02.593) 0:04:06.169 ********
2026-01-30 05:13:48.604398 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-30 05:13:48.604404 | orchestrator |
2026-01-30 05:13:48.604414 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-01-30 05:13:48.604436 | orchestrator | Friday 30 January 2026 05:13:43 +0000 (0:00:04.539) 0:04:10.708 ********
2026-01-30 05:13:48.604451 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:13:48.604460 | orchestrator |
2026-01-30 05:13:48.604470 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-01-30 05:13:48.604479 | orchestrator | Friday 30 January 2026 05:13:44 +0000 (0:00:01.101) 0:04:11.810 ********
2026-01-30 05:13:48.604489 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-01-30 05:13:48.604500 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-01-30 05:13:48.604510 | orchestrator |
2026-01-30 05:13:48.604520 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-01-30 05:13:48.604530 | orchestrator | Friday 30 January 2026 05:13:47 +0000 (0:00:02.837) 0:04:14.648 ********
2026-01-30 05:13:48.604540 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:13:48.604559 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:14:13.597590 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:14:13.597685 | orchestrator |
2026-01-30 05:14:13.597696 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-01-30 05:14:13.597704 | orchestrator | Friday 30 January 2026 05:13:48 +0000 (0:00:01.342) 0:04:15.991 ********
2026-01-30 05:14:13.597731 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:14:13.597739 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:14:13.597746 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:14:13.597753 | orchestrator |
2026-01-30 05:14:13.597760 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-01-30 05:14:13.597767 | orchestrator |
2026-01-30 05:14:13.597773 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-01-30 05:14:13.597782 | orchestrator | Friday 30 January 2026 05:13:50 +0000 (0:00:01.993) 0:04:17.984 ********
2026-01-30 05:14:13.597794 | orchestrator | ok: [testbed-manager]
2026-01-30 05:14:13.597806 | orchestrator |
2026-01-30 05:14:13.597817 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-01-30 05:14:13.597829 | orchestrator | Friday 30 January 2026 05:13:51 +0000 (0:00:01.158) 0:04:19.142 ********
2026-01-30 05:14:13.597856 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-01-30 05:14:13.597869 | orchestrator |
2026-01-30 05:14:13.597880 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-01-30 05:14:13.597892 | orchestrator | Friday 30 January 2026 05:13:53 +0000 (0:00:01.461) 0:04:20.603 ********
2026-01-30 05:14:13.597904 | orchestrator | ok: [testbed-manager]
2026-01-30 05:14:13.597916 | orchestrator |
2026-01-30 05:14:13.597928 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-01-30 05:14:13.597936 | orchestrator |
2026-01-30 05:14:13.597943 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-01-30 05:14:13.597949 | orchestrator | Friday 30 January 2026 05:13:58 +0000 (0:00:05.336) 0:04:25.940 ********
2026-01-30 05:14:13.597956 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:14:13.597963 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:14:13.597969 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:14:13.597976 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:14:13.597982 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:14:13.597989 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:14:13.597995 | orchestrator |
2026-01-30 05:14:13.598002 | orchestrator | TASK [Manage labels] ***********************************************************
2026-01-30 05:14:13.598008 | orchestrator | Friday 30 January 2026 05:14:00 +0000 (0:00:01.936) 0:04:27.877 ********
2026-01-30 05:14:13.598063 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-30 05:14:13.598071 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-30 05:14:13.598078 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-30 05:14:13.598085 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-01-30 05:14:13.598092 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-30 05:14:13.598098 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-01-30 05:14:13.598105 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-01-30 05:14:13.598112 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-30 05:14:13.598118 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-30 05:14:13.598125 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-30 05:14:13.598131 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-30 05:14:13.598138 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-30 05:14:13.598144 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-30 05:14:13.598171 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-30 05:14:13.598179 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-30 05:14:13.598195 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-30 05:14:13.598203 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-30 05:14:13.598211 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-30 05:14:13.598218 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-30 05:14:13.598226 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-30 05:14:13.598234 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-30 05:14:13.598242 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-30 05:14:13.598250 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-30 
05:14:13.598257 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-30 05:14:13.598265 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-30 05:14:13.598272 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-30 05:14:13.598294 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-30 05:14:13.598302 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-30 05:14:13.598309 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-30 05:14:13.598317 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-30 05:14:13.598324 | orchestrator | 2026-01-30 05:14:13.598332 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-30 05:14:13.598339 | orchestrator | Friday 30 January 2026 05:14:09 +0000 (0:00:08.817) 0:04:36.694 ******** 2026-01-30 05:14:13.598347 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:14:13.598356 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:14:13.598363 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:14:13.598371 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:14:13.598378 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:14:13.598386 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:14:13.598393 | orchestrator | 2026-01-30 05:14:13.598401 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-30 05:14:13.598414 | orchestrator | Friday 30 January 2026 05:14:11 +0000 (0:00:01.774) 0:04:38.469 ******** 2026-01-30 05:14:13.598422 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:14:13.598430 | orchestrator | skipping: [testbed-node-4] 
2026-01-30 05:14:13.598438 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:14:13.598445 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:14:13.598453 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:14:13.598461 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:14:13.598468 | orchestrator | 2026-01-30 05:14:13.598477 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:14:13.598485 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 05:14:13.598496 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-30 05:14:13.598505 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-30 05:14:13.598513 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-30 05:14:13.598521 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 05:14:13.598534 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 05:14:13.598542 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-30 05:14:13.598549 | orchestrator | 2026-01-30 05:14:13.598557 | orchestrator | 2026-01-30 05:14:13.598564 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:14:13.598571 | orchestrator | Friday 30 January 2026 05:14:13 +0000 (0:00:02.502) 0:04:40.972 ******** 2026-01-30 05:14:13.598578 | orchestrator | =============================================================================== 2026-01-30 05:14:13.598585 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.11s 2026-01-30 05:14:13.598592 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.48s 2026-01-30 05:14:13.598600 | orchestrator | Manage labels ----------------------------------------------------------- 8.82s 2026-01-30 05:14:13.598608 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.31s 2026-01-30 05:14:13.598615 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.45s 2026-01-30 05:14:13.598622 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.34s 2026-01-30 05:14:13.598629 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.73s 2026-01-30 05:14:13.598636 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.58s 2026-01-30 05:14:13.598643 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.54s 2026-01-30 05:14:13.598651 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.89s 2026-01-30 05:14:13.598658 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.84s 2026-01-30 05:14:13.598665 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.84s 2026-01-30 05:14:13.598672 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.75s 2026-01-30 05:14:13.598679 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.63s 2026-01-30 05:14:13.598686 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.63s 2026-01-30 05:14:13.598694 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.63s 2026-01-30 05:14:13.598701 | orchestrator | kubectl : 
Install required packages ------------------------------------- 2.60s 2026-01-30 05:14:13.598708 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.59s 2026-01-30 05:14:13.598720 | orchestrator | Manage taints ----------------------------------------------------------- 2.50s 2026-01-30 05:14:14.011593 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 2.47s 2026-01-30 05:14:14.284986 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-01-30 05:14:14.285111 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-01-30 05:14:14.292841 | orchestrator | + set -e 2026-01-30 05:14:14.292919 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-30 05:14:14.292931 | orchestrator | ++ export INTERACTIVE=false 2026-01-30 05:14:14.292940 | orchestrator | ++ INTERACTIVE=false 2026-01-30 05:14:14.292951 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-30 05:14:14.292965 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-30 05:14:14.292978 | orchestrator | + osism apply openstackclient 2026-01-30 05:14:26.454330 | orchestrator | 2026-01-30 05:14:26 | INFO  | Task 2859efbb-42e1-4de0-8642-4dce46fde365 (openstackclient) was prepared for execution. 2026-01-30 05:14:26.454428 | orchestrator | 2026-01-30 05:14:26 | INFO  | It takes a moment until task 2859efbb-42e1-4de0-8642-4dce46fde365 (openstackclient) has been started and output is visible here. 
2026-01-30 05:14:51.513924 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-30 05:14:51.514141 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-30 05:14:51.514177 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-30 05:14:51.514187 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-30 05:14:51.514207 | orchestrator | 2026-01-30 05:14:51.514217 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-01-30 05:14:51.514227 | orchestrator | 2026-01-30 05:14:51.514233 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-01-30 05:14:51.514239 | orchestrator | Friday 30 January 2026 05:14:32 +0000 (0:00:01.483) 0:00:01.483 ******** 2026-01-30 05:14:51.514245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-01-30 05:14:51.514252 | orchestrator | 2026-01-30 05:14:51.514258 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-01-30 05:14:51.514263 | orchestrator | Friday 30 January 2026 05:14:32 +0000 (0:00:00.849) 0:00:02.332 ******** 2026-01-30 05:14:51.514269 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-01-30 05:14:51.514274 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-01-30 05:14:51.514280 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-01-30 05:14:51.514286 | orchestrator | 2026-01-30 05:14:51.514291 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-01-30 05:14:51.514296 | orchestrator | Friday 30 January 2026 05:14:34 +0000 (0:00:01.320) 0:00:03.653 ******** 2026-01-30 05:14:51.514302 | 
orchestrator | changed: [testbed-manager] 2026-01-30 05:14:51.514308 | orchestrator | 2026-01-30 05:14:51.514313 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-01-30 05:14:51.514318 | orchestrator | Friday 30 January 2026 05:14:35 +0000 (0:00:01.139) 0:00:04.792 ******** 2026-01-30 05:14:51.514325 | orchestrator | ok: [testbed-manager] 2026-01-30 05:14:51.514332 | orchestrator | 2026-01-30 05:14:51.514337 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-01-30 05:14:51.514342 | orchestrator | Friday 30 January 2026 05:14:36 +0000 (0:00:01.012) 0:00:05.805 ******** 2026-01-30 05:14:51.514348 | orchestrator | ok: [testbed-manager] 2026-01-30 05:14:51.514353 | orchestrator | 2026-01-30 05:14:51.514358 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-01-30 05:14:51.514364 | orchestrator | Friday 30 January 2026 05:14:37 +0000 (0:00:00.908) 0:00:06.713 ******** 2026-01-30 05:14:51.514369 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-01-30 05:14:51.514375 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-01-30 05:14:51.514386 | orchestrator | ok: [testbed-manager] 2026-01-30 05:14:51.514391 | orchestrator | 2026-01-30 05:14:51.514396 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-01-30 05:14:51.514402 | orchestrator | Friday 30 January 2026 05:14:38 +0000 (0:00:00.698) 0:00:07.412 ******** 2026-01-30 05:14:51.514408 | orchestrator | changed: [testbed-manager] 2026-01-30 05:14:51.514413 | orchestrator | 2026-01-30 05:14:51.514418 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-01-30 05:14:51.514424 | orchestrator | Friday 30 January 2026 05:14:47 +0000 (0:00:09.938) 0:00:17.351 ******** 2026-01-30 05:14:51.514429 
| orchestrator | changed: [testbed-manager] 2026-01-30 05:14:51.514459 | orchestrator | 2026-01-30 05:14:51.514469 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-01-30 05:14:51.514477 | orchestrator | Friday 30 January 2026 05:14:49 +0000 (0:00:01.413) 0:00:18.764 ******** 2026-01-30 05:14:51.514487 | orchestrator | changed: [testbed-manager] 2026-01-30 05:14:51.514496 | orchestrator | 2026-01-30 05:14:51.514505 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-01-30 05:14:51.514514 | orchestrator | Friday 30 January 2026 05:14:50 +0000 (0:00:00.628) 0:00:19.392 ******** 2026-01-30 05:14:51.514523 | orchestrator | ok: [testbed-manager] 2026-01-30 05:14:51.514533 | orchestrator | 2026-01-30 05:14:51.514543 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:14:51.514553 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-30 05:14:51.514563 | orchestrator | 2026-01-30 05:14:51.514571 | orchestrator | 2026-01-30 05:14:51.514578 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:14:51.514584 | orchestrator | Friday 30 January 2026 05:14:51 +0000 (0:00:01.133) 0:00:20.526 ******** 2026-01-30 05:14:51.514590 | orchestrator | =============================================================================== 2026-01-30 05:14:51.514597 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 9.94s 2026-01-30 05:14:51.514603 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.41s 2026-01-30 05:14:51.514609 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.32s 2026-01-30 05:14:51.514615 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file 
----------- 1.14s 2026-01-30 05:14:51.514621 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.13s 2026-01-30 05:14:51.514627 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.01s 2026-01-30 05:14:51.514651 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.91s 2026-01-30 05:14:51.514658 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.85s 2026-01-30 05:14:51.514664 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.70s 2026-01-30 05:14:51.514671 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.63s 2026-01-30 05:14:51.796415 | orchestrator | + osism apply -a upgrade common 2026-01-30 05:14:53.853174 | orchestrator | 2026-01-30 05:14:53 | INFO  | Task 96153096-7f9a-43a8-99ff-06470d68f6c7 (common) was prepared for execution. 2026-01-30 05:14:53.853285 | orchestrator | 2026-01-30 05:14:53 | INFO  | It takes a moment until task 96153096-7f9a-43a8-99ff-06470d68f6c7 (common) has been started and output is visible here. 
2026-01-30 05:15:12.358961 | orchestrator | 2026-01-30 05:15:12.359042 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-30 05:15:12.359101 | orchestrator | 2026-01-30 05:15:12.359107 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-30 05:15:12.359111 | orchestrator | Friday 30 January 2026 05:15:00 +0000 (0:00:02.249) 0:00:02.249 ******** 2026-01-30 05:15:12.359116 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 05:15:12.359121 | orchestrator | 2026-01-30 05:15:12.359125 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-30 05:15:12.359129 | orchestrator | Friday 30 January 2026 05:15:03 +0000 (0:00:03.299) 0:00:05.549 ******** 2026-01-30 05:15:12.359134 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:15:12.359138 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:15:12.359142 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:15:12.359146 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:15:12.359168 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:15:12.359172 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:15:12.359175 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:15:12.359179 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:15:12.359183 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:15:12.359187 | orchestrator | 
ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:15:12.359190 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:15:12.359194 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:15:12.359198 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:15:12.359202 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:15:12.359205 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:15:12.359209 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:15:12.359213 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:15:12.359217 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:15:12.359220 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:15:12.359224 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:15:12.359228 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:15:12.359231 | orchestrator | 2026-01-30 05:15:12.359235 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-30 05:15:12.359239 | orchestrator | Friday 30 January 2026 05:15:07 +0000 (0:00:03.652) 0:00:09.202 ******** 2026-01-30 05:15:12.359243 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 05:15:12.359248 | orchestrator | 2026-01-30 
05:15:12.359252 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-30 05:15:12.359256 | orchestrator | Friday 30 January 2026 05:15:09 +0000 (0:00:02.661) 0:00:11.864 ******** 2026-01-30 05:15:12.359263 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:12.359279 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:12.359301 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:12.359309 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:12.359313 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:12.359317 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:12.359420 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:12.359425 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:12.359429 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:12.359442 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.129908 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.129994 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130005 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-01-30 05:15:15.130085 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130092 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130101 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130131 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130135 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130143 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130147 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:15.130151 | orchestrator | 2026-01-30 05:15:15.130180 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-30 05:15:15.130185 | orchestrator | Friday 30 January 2026 05:15:14 +0000 (0:00:04.446) 0:00:16.310 ******** 2026-01-30 05:15:15.130192 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:15.130197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:15.130201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:15.130218 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:15.130230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208576 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:17.208730 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:15:17.208754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:17.208846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208889 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:15:17.208901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:17.208944 | orchestrator | 
skipping: [testbed-node-1] 2026-01-30 05:15:17.208956 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:15:17.208972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.208995 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:15:17.209006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:17.209020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.209099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:17.209124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:17.209138 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:15:17.209164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.462792 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.462901 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:15:20.462920 | orchestrator | 2026-01-30 05:15:20.462933 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-30 05:15:20.462946 | orchestrator | Friday 30 January 2026 05:15:17 +0000 (0:00:02.829) 0:00:19.140 ******** 2026-01-30 05:15:20.462975 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:20.462990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:20.463002 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.463083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.463098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:20.463110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.463149 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.463162 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:15:20.463173 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:15:20.463185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-01-30 05:15:20.463197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:20.463208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.463228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:20.463240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.463252 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:15:20.463263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:20.463275 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:15:20.463303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:32.597195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:32.597301 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:15:32.597316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:32.597327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:15:32.597349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:32.597357 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:32.597363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:32.597370 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:15:32.597377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:32.597384 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:15:32.597390 | orchestrator | 2026-01-30 05:15:32.597398 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-01-30 05:15:32.597405 | orchestrator | Friday 30 January 2026 05:15:20 +0000 (0:00:03.250) 0:00:22.391 ******** 2026-01-30 05:15:32.597411 | orchestrator 
| skipping: [testbed-manager] 2026-01-30 05:15:32.597417 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:15:32.597423 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:15:32.597430 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:15:32.597448 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:15:32.597455 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:15:32.597461 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:15:32.597467 | orchestrator | 2026-01-30 05:15:32.597474 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-30 05:15:32.597480 | orchestrator | Friday 30 January 2026 05:15:22 +0000 (0:00:02.172) 0:00:24.563 ******** 2026-01-30 05:15:32.597486 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:15:32.597492 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:15:32.597498 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:15:32.597517 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:15:32.597523 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:15:32.597533 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:15:32.597540 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:15:32.597552 | orchestrator | 2026-01-30 05:15:32.597558 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-30 05:15:32.597565 | orchestrator | Friday 30 January 2026 05:15:24 +0000 (0:00:02.034) 0:00:26.598 ******** 2026-01-30 05:15:32.597571 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:15:32.597577 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:15:32.597583 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:15:32.597589 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:15:32.597595 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:15:32.597601 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:15:32.597607 | orchestrator | 
skipping: [testbed-node-5] 2026-01-30 05:15:32.597613 | orchestrator | 2026-01-30 05:15:32.597620 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-01-30 05:15:32.597626 | orchestrator | Friday 30 January 2026 05:15:26 +0000 (0:00:02.004) 0:00:28.603 ******** 2026-01-30 05:15:32.597632 | orchestrator | changed: [testbed-manager] 2026-01-30 05:15:32.597638 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:15:32.597644 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:15:32.597650 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:15:32.597656 | orchestrator | changed: [testbed-node-3] 2026-01-30 05:15:32.597662 | orchestrator | changed: [testbed-node-4] 2026-01-30 05:15:32.597668 | orchestrator | changed: [testbed-node-5] 2026-01-30 05:15:32.597674 | orchestrator | 2026-01-30 05:15:32.597681 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-30 05:15:32.597687 | orchestrator | Friday 30 January 2026 05:15:29 +0000 (0:00:03.046) 0:00:31.650 ******** 2026-01-30 05:15:32.597693 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:32.597701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:32.597707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:32.597714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:32.597726 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:34.570328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:34.570343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:34.570355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570378 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570540 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:34.570558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:53.529218 | orchestrator | 2026-01-30 05:15:53.529336 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-30 05:15:53.529347 | orchestrator | Friday 30 January 2026 05:15:34 +0000 (0:00:04.849) 0:00:36.499 ******** 2026-01-30 05:15:53.529354 | orchestrator | [WARNING]: Skipped 2026-01-30 05:15:53.529361 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-30 05:15:53.529369 | orchestrator | to this access issue: 2026-01-30 05:15:53.529375 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-30 05:15:53.529382 | orchestrator | directory 2026-01-30 05:15:53.529388 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:15:53.529396 | orchestrator | 2026-01-30 05:15:53.529402 | orchestrator | TASK [common : Find custom fluentd filter config files] 
************************ 2026-01-30 05:15:53.529408 | orchestrator | Friday 30 January 2026 05:15:36 +0000 (0:00:02.176) 0:00:38.676 ******** 2026-01-30 05:15:53.529414 | orchestrator | [WARNING]: Skipped 2026-01-30 05:15:53.529420 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-30 05:15:53.529426 | orchestrator | to this access issue: 2026-01-30 05:15:53.529433 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-30 05:15:53.529439 | orchestrator | directory 2026-01-30 05:15:53.529445 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:15:53.529451 | orchestrator | 2026-01-30 05:15:53.529458 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-30 05:15:53.529464 | orchestrator | Friday 30 January 2026 05:15:38 +0000 (0:00:01.800) 0:00:40.476 ******** 2026-01-30 05:15:53.529470 | orchestrator | [WARNING]: Skipped 2026-01-30 05:15:53.529476 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-30 05:15:53.529482 | orchestrator | to this access issue: 2026-01-30 05:15:53.529488 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-30 05:15:53.529495 | orchestrator | directory 2026-01-30 05:15:53.529501 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:15:53.529507 | orchestrator | 2026-01-30 05:15:53.529513 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-30 05:15:53.529519 | orchestrator | Friday 30 January 2026 05:15:40 +0000 (0:00:01.801) 0:00:42.277 ******** 2026-01-30 05:15:53.529525 | orchestrator | [WARNING]: Skipped 2026-01-30 05:15:53.529531 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-30 05:15:53.529537 | orchestrator | to this access issue: 2026-01-30 
05:15:53.529543 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-30 05:15:53.529549 | orchestrator | directory 2026-01-30 05:15:53.529556 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:15:53.529579 | orchestrator | 2026-01-30 05:15:53.529586 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-30 05:15:53.529592 | orchestrator | Friday 30 January 2026 05:15:42 +0000 (0:00:01.736) 0:00:44.014 ******** 2026-01-30 05:15:53.529599 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:15:53.529610 | orchestrator | changed: [testbed-manager] 2026-01-30 05:15:53.529620 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:15:53.529629 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:15:53.529639 | orchestrator | changed: [testbed-node-3] 2026-01-30 05:15:53.529649 | orchestrator | changed: [testbed-node-4] 2026-01-30 05:15:53.529659 | orchestrator | changed: [testbed-node-5] 2026-01-30 05:15:53.529669 | orchestrator | 2026-01-30 05:15:53.529678 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-30 05:15:53.529688 | orchestrator | Friday 30 January 2026 05:15:45 +0000 (0:00:03.827) 0:00:47.842 ******** 2026-01-30 05:15:53.529698 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:15:53.529711 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:15:53.529721 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:15:53.529731 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:15:53.529741 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 
2026-01-30 05:15:53.529753 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:15:53.529763 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:15:53.529771 | orchestrator | 2026-01-30 05:15:53.529779 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-30 05:15:53.529786 | orchestrator | Friday 30 January 2026 05:15:48 +0000 (0:00:03.056) 0:00:50.899 ******** 2026-01-30 05:15:53.529793 | orchestrator | ok: [testbed-manager] 2026-01-30 05:15:53.529800 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:15:53.529807 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:15:53.529814 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:15:53.529821 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:15:53.529828 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:15:53.529835 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:15:53.529841 | orchestrator | 2026-01-30 05:15:53.529848 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-30 05:15:53.529855 | orchestrator | Friday 30 January 2026 05:15:51 +0000 (0:00:02.909) 0:00:53.808 ******** 2026-01-30 05:15:53.529884 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:53.529895 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:53.529905 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:15:53.529921 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:53.529929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:53.529936 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:15:53.529944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:15:53.529955 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-01-30 05:16:01.107493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:01.107628 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:01.107672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:01.107686 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:01.107698 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:01.107709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:01.107723 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:01.107771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:01.107788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:01.107806 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:01.107817 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:01.107827 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:01.107837 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:01.107848 | orchestrator | 2026-01-30 05:16:01.107859 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-30 05:16:01.107870 | orchestrator | Friday 30 January 2026 05:15:54 +0000 (0:00:02.858) 0:00:56.667 ******** 2026-01-30 05:16:01.107879 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:16:01.107890 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:16:01.107900 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:16:01.107910 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:16:01.107919 | 
orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:16:01.107929 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:16:01.107938 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:16:01.107948 | orchestrator | 2026-01-30 05:16:01.107958 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-30 05:16:01.107967 | orchestrator | Friday 30 January 2026 05:15:57 +0000 (0:00:02.976) 0:00:59.644 ******** 2026-01-30 05:16:01.108012 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:16:01.108025 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:16:01.108037 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:16:01.108048 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:16:01.108066 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:16:01.108086 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:16:03.576660 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:16:03.576753 | orchestrator | 2026-01-30 05:16:03.576767 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-30 05:16:03.576778 | orchestrator | Friday 30 January 2026 05:16:01 +0000 (0:00:03.403) 0:01:03.047 ******** 2026-01-30 05:16:03.576808 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:03.576821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:03.576831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:03.576841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:03.576850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:03.576860 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:03.576887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:03.576918 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:03.576929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:03.576939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:03.576953 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:03.576965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:03.577051 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:03.577077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:08.361717 | orchestrator | 2026-01-30 05:16:08.361731 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-01-30 05:16:08.361743 | orchestrator | Friday 30 
January 2026 05:16:05 +0000 (0:00:04.494) 0:01:07.542 ******** 2026-01-30 05:16:08.361755 | orchestrator | changed: [testbed-manager] => { 2026-01-30 05:16:08.361766 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:16:08.361777 | orchestrator | } 2026-01-30 05:16:08.361787 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:16:08.361798 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:16:08.361809 | orchestrator | } 2026-01-30 05:16:08.361819 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:16:08.361830 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:16:08.361840 | orchestrator | } 2026-01-30 05:16:08.361850 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:16:08.361860 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:16:08.361871 | orchestrator | } 2026-01-30 05:16:08.361881 | orchestrator | changed: [testbed-node-3] => { 2026-01-30 05:16:08.361891 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:16:08.361902 | orchestrator | } 2026-01-30 05:16:08.361912 | orchestrator | changed: [testbed-node-4] => { 2026-01-30 05:16:08.361923 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:16:08.361934 | orchestrator | } 2026-01-30 05:16:08.361944 | orchestrator | changed: [testbed-node-5] => { 2026-01-30 05:16:08.361955 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:16:08.361965 | orchestrator | } 2026-01-30 05:16:08.361995 | orchestrator | 2026-01-30 05:16:08.362081 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:16:08.362094 | orchestrator | Friday 30 January 2026 05:16:07 +0000 (0:00:02.129) 0:01:09.671 ******** 2026-01-30 05:16:08.362103 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:08.362113 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:08.362120 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:08.362128 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:16:08.362135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:08.362149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:08.362156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:08.362162 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:16:08.362169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:08.362186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716356 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:16:14.716368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:14.716376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716400 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:16:14.716404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:14.716409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716417 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:16:14.716446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:14.716452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:14.716464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716472 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:16:14.716476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:14.716480 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:16:14.716484 | orchestrator | 2026-01-30 05:16:14.716489 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:16:14.716494 | orchestrator | Friday 30 January 2026 05:16:11 +0000 (0:00:03.289) 0:01:12.961 ******** 2026-01-30 05:16:14.716498 | orchestrator | 2026-01-30 05:16:14.716502 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:16:14.716505 | orchestrator | Friday 30 January 2026 05:16:11 +0000 (0:00:00.438) 0:01:13.399 ******** 2026-01-30 05:16:14.716509 | orchestrator | 2026-01-30 05:16:14.716513 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:16:14.716516 | orchestrator | Friday 30 January 2026 05:16:11 +0000 (0:00:00.463) 0:01:13.863 ******** 2026-01-30 05:16:14.716520 | orchestrator | 2026-01-30 05:16:14.716524 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:16:14.716531 | orchestrator | Friday 30 January 2026 05:16:12 +0000 (0:00:00.438) 0:01:14.301 ******** 2026-01-30 05:16:14.716535 | orchestrator | 2026-01-30 05:16:14.716538 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:16:14.716542 | orchestrator | Friday 30 January 2026 05:16:13 +0000 (0:00:00.663) 0:01:14.965 ******** 2026-01-30 05:16:14.716546 | orchestrator | 2026-01-30 05:16:14.716550 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:16:14.716553 | orchestrator | Friday 30 January 2026 05:16:13 +0000 (0:00:00.436) 0:01:15.401 ******** 2026-01-30 05:16:14.716557 | orchestrator | 2026-01-30 05:16:14.716561 | orchestrator | 
TASK [common : Flush handlers] ************************************************* 2026-01-30 05:16:14.716565 | orchestrator | Friday 30 January 2026 05:16:13 +0000 (0:00:00.448) 0:01:15.849 ******** 2026-01-30 05:16:14.716568 | orchestrator | 2026-01-30 05:16:14.716575 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-30 05:16:17.400157 | orchestrator | Friday 30 January 2026 05:16:14 +0000 (0:00:00.782) 0:01:16.632 ******** 2026-01-30 05:16:17.400251 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_c26xgfww/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_c26xgfww/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_c26xgfww/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 
429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-01-30 05:16:17.400313 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_8spqovhi/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_8spqovhi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_8spqovhi/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-01-30 05:16:17.400330 | orchestrator | fatal: [testbed-node-3]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_hlm9mpxd/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_hlm9mpxd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_hlm9mpxd/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-01-30 05:16:17.400348 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_dxckcy3o/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_dxckcy3o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_dxckcy3o/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-01-30 05:16:20.806767 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_e63c9bl3/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_e63c9bl3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_e63c9bl3/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-01-30 05:16:20.806917 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_oj4v84vj/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_oj4v84vj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_oj4v84vj/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-01-30 05:16:20.807010 | orchestrator | fatal: [testbed-node-5]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_gc26vqgc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_gc26vqgc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_gc26vqgc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-01-30 05:16:20.807027 | orchestrator | 2026-01-30 05:16:20.807040 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:16:20.807054 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-01-30 05:16:20.807115 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-01-30 05:16:20.807128 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-01-30 05:16:20.807139 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-01-30 05:16:20.807150 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-01-30 05:16:20.807161 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-01-30 05:16:20.807172 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-01-30 05:16:20.807182 | orchestrator | 2026-01-30 05:16:20.807193 | orchestrator | 2026-01-30 05:16:20.807214 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:16:21.392613 | orchestrator | 2026-01-30 05:16:21 | INFO  | Task 704eb64d-efe6-4ae1-bfe6-3008625e2b30 (common) was prepared for execution. 2026-01-30 05:16:21.392737 | orchestrator | 2026-01-30 05:16:21 | INFO  | It takes a moment until task 704eb64d-efe6-4ae1-bfe6-3008625e2b30 (common) has been started and output is visible here. 
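A note on the failure above: the pull that raised `docker.errors.APIError` asked the daemon for `registry.osism.tech/kolla/release/fluentd:5.0.8.20251208` (see the `fromImage` query parameter), while the service definitions printed in the following play carry `registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208`, i.e. a repository path that includes the `2025.1` release segment. A minimal sketch for comparing the two references, using a hypothetical `split_image_ref` helper (not part of kolla or docker-py) and the two strings taken verbatim from this log:

```python
# Illustrative only: split an image reference into (registry, repository, tag)
# so the repository path the failed pull used can be compared against the one
# in the later play's service definitions.
def split_image_ref(ref: str) -> tuple[str, str, str]:
    # The tag follows the last colon; the registry is the first path component.
    name, _, tag = ref.rpartition(":")
    registry, _, repository = name.partition("/")
    return registry, repository, tag

# Reference from the APIError URL of the failed pull:
failed = split_image_ref("registry.osism.tech/kolla/release/fluentd:5.0.8.20251208")
# Reference from the fluentd service definition in the subsequent play:
defined = split_image_ref("registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208")

print(failed[1])   # kolla/release/fluentd
print(defined[1])  # kolla/release/2025.1/fluentd
```

Whether the missing `2025.1` segment is the root cause or the registry simply lacked that tag cannot be confirmed from this log alone; checking the registry's tag list for both repository paths would distinguish the two cases.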
2026-01-30 05:16:38.683085 | orchestrator | Friday 30 January 2026 05:16:20 +0000 (0:00:06.113) 0:01:22.745 ******** 2026-01-30 05:16:38.683174 | orchestrator | =============================================================================== 2026-01-30 05:16:38.683184 | orchestrator | common : Restart fluentd container -------------------------------------- 6.11s 2026-01-30 05:16:38.683192 | orchestrator | common : Copying over config.json files for services -------------------- 4.85s 2026-01-30 05:16:38.683199 | orchestrator | service-check-containers : common | Check containers -------------------- 4.49s 2026-01-30 05:16:38.683207 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.45s 2026-01-30 05:16:38.683213 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.83s 2026-01-30 05:16:38.683220 | orchestrator | common : Flush handlers ------------------------------------------------- 3.67s 2026-01-30 05:16:38.683226 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.65s 2026-01-30 05:16:38.683251 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.40s 2026-01-30 05:16:38.683258 | orchestrator | common : include_tasks -------------------------------------------------- 3.30s 2026-01-30 05:16:38.683265 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.29s 2026-01-30 05:16:38.683272 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.25s 2026-01-30 05:16:38.683278 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.06s 2026-01-30 05:16:38.683285 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.05s 2026-01-30 05:16:38.683292 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.98s 2026-01-30 
05:16:38.683298 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.91s 2026-01-30 05:16:38.683305 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.86s 2026-01-30 05:16:38.683311 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.83s 2026-01-30 05:16:38.683319 | orchestrator | common : include_tasks -------------------------------------------------- 2.66s 2026-01-30 05:16:38.683325 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.18s 2026-01-30 05:16:38.683329 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.17s 2026-01-30 05:16:38.683346 | orchestrator | 2026-01-30 05:16:38.683351 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-01-30 05:16:38.683355 | orchestrator | 2026-01-30 05:16:38.683359 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-30 05:16:38.683363 | orchestrator | Friday 30 January 2026 05:16:27 +0000 (0:00:02.191) 0:00:02.191 ******** 2026-01-30 05:16:38.683370 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 05:16:38.683375 | orchestrator | 2026-01-30 05:16:38.683379 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-30 05:16:38.683383 | orchestrator | Friday 30 January 2026 05:16:30 +0000 (0:00:03.196) 0:00:05.388 ******** 2026-01-30 05:16:38.683388 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:16:38.683392 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:16:38.683395 | orchestrator | ok: [testbed-node-0] => 
(item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:16:38.683399 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:16:38.683403 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:16:38.683407 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:16:38.683411 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:16:38.683415 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-30 05:16:38.683419 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:16:38.683422 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:16:38.683426 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:16:38.683430 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:16:38.683433 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:16:38.683437 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:16:38.683441 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-30 05:16:38.683445 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:16:38.683448 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:16:38.683452 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:16:38.683456 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:16:38.683459 | orchestrator | ok: 
[testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:16:38.683474 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-30 05:16:38.683478 | orchestrator | 2026-01-30 05:16:38.683482 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-30 05:16:38.683486 | orchestrator | Friday 30 January 2026 05:16:33 +0000 (0:00:03.101) 0:00:08.489 ******** 2026-01-30 05:16:38.683489 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 05:16:38.683494 | orchestrator | 2026-01-30 05:16:38.683498 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-30 05:16:38.683502 | orchestrator | Friday 30 January 2026 05:16:36 +0000 (0:00:02.567) 0:00:11.057 ******** 2026-01-30 05:16:38.683507 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:38.683518 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:38.683525 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:38.683529 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:38.683533 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:38.683537 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:38.683548 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:41.515475 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515597 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515613 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515637 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515647 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515666 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515698 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515716 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515729 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515754 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515765 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515775 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515785 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:16:41.515795 | orchestrator | 2026-01-30 05:16:41.515807 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-30 05:16:41.515818 | orchestrator | Friday 30 January 2026 05:16:40 +0000 (0:00:04.493) 0:00:15.551 ******** 2026-01-30 05:16:41.515829 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:41.515853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:43.741631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.741773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.741818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.742667 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.742718 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:16:43.742741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:43.742759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.742802 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:16:43.742818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.742863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:43.742882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:43.742897 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:16:43.742915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.742931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.742991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.743007 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:16:43.743022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:43.743048 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:16:43.743063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:43.743092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:46.963215 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963233 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:16:46.963249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963268 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:16:46.963276 | 
orchestrator | 2026-01-30 05:16:46.963285 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-30 05:16:46.963310 | orchestrator | Friday 30 January 2026 05:16:43 +0000 (0:00:02.919) 0:00:18.471 ******** 2026-01-30 05:16:46.963319 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:46.963327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:46.963336 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:46.963369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963378 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963402 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:16:46.963411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:46.963419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:46.963437 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:16:46.963445 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:16:46.963460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:58.755631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:58.755644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755677 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:16:58.755688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:16:58.755705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755722 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:16:58.755744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755755 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:16:58.755764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:16:58.755774 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:16:58.755779 | orchestrator | 2026-01-30 05:16:58.755785 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-01-30 05:16:58.755791 | orchestrator | Friday 30 January 2026 05:16:46 +0000 (0:00:03.227) 0:00:21.699 ******** 2026-01-30 05:16:58.755796 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:16:58.755801 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:16:58.755805 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:16:58.755810 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:16:58.755815 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:16:58.755819 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:16:58.755824 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:16:58.755829 | orchestrator | 2026-01-30 05:16:58.755834 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-30 05:16:58.755839 | orchestrator | Friday 30 January 2026 05:16:49 +0000 (0:00:02.145) 0:00:23.844 ******** 2026-01-30 05:16:58.755844 | orchestrator | skipping: [testbed-manager] 2026-01-30 
05:16:58.755848 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:16:58.755853 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:16:58.755858 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:16:58.755862 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:16:58.755867 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:16:58.755872 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:16:58.755876 | orchestrator | 2026-01-30 05:16:58.755881 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-30 05:16:58.755886 | orchestrator | Friday 30 January 2026 05:16:51 +0000 (0:00:02.033) 0:00:25.878 ******** 2026-01-30 05:16:58.755891 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:16:58.755896 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:16:58.755900 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:16:58.755905 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:16:58.755910 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:16:58.755915 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:16:58.755919 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:16:58.755924 | orchestrator | 2026-01-30 05:16:58.755974 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-01-30 05:16:58.755980 | orchestrator | Friday 30 January 2026 05:16:52 +0000 (0:00:01.869) 0:00:27.747 ******** 2026-01-30 05:16:58.755985 | orchestrator | ok: [testbed-manager] 2026-01-30 05:16:58.755991 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:16:58.755995 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:16:58.756000 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:16:58.756005 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:16:58.756010 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:16:58.756014 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:16:58.756019 | orchestrator | 2026-01-30 
05:16:58.756024 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-30 05:16:58.756029 | orchestrator | Friday 30 January 2026 05:16:55 +0000 (0:00:02.947) 0:00:30.695 ******** 2026-01-30 05:16:58.756034 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:16:58.756045 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:00.660021 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:00.660130 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:00.660147 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660162 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:00.660175 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:00.660188 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:00.660201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660260 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660281 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660294 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660307 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-30 05:17:00.660320 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660333 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660354 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:00.660376 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.070342 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.070437 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.070446 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.070454 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.070461 | orchestrator | 2026-01-30 05:17:20.070469 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-30 05:17:20.070476 | orchestrator | Friday 30 January 2026 05:17:00 +0000 (0:00:04.696) 0:00:35.392 ******** 2026-01-30 05:17:20.070483 | orchestrator | [WARNING]: Skipped 2026-01-30 05:17:20.070490 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-30 05:17:20.070497 | orchestrator | to this access issue: 2026-01-30 05:17:20.070504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-30 05:17:20.070510 | orchestrator | directory 2026-01-30 05:17:20.070516 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:17:20.070523 | orchestrator | 2026-01-30 05:17:20.070530 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-30 05:17:20.070536 | orchestrator | Friday 30 January 2026 05:17:02 +0000 (0:00:02.315) 0:00:37.708 ******** 2026-01-30 05:17:20.070542 | orchestrator | [WARNING]: Skipped 2026-01-30 05:17:20.070548 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-30 05:17:20.070572 | orchestrator | to this access issue: 2026-01-30 05:17:20.070579 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-30 05:17:20.070591 | orchestrator | directory 2026-01-30 05:17:20.070601 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:17:20.070612 | orchestrator | 2026-01-30 05:17:20.070623 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-30 05:17:20.070633 | orchestrator | Friday 30 
January 2026 05:17:04 +0000 (0:00:01.828) 0:00:39.536 ******** 2026-01-30 05:17:20.070644 | orchestrator | [WARNING]: Skipped 2026-01-30 05:17:20.070656 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-30 05:17:20.070667 | orchestrator | to this access issue: 2026-01-30 05:17:20.070679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-30 05:17:20.070688 | orchestrator | directory 2026-01-30 05:17:20.070694 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:17:20.070700 | orchestrator | 2026-01-30 05:17:20.070706 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-01-30 05:17:20.070712 | orchestrator | Friday 30 January 2026 05:17:06 +0000 (0:00:01.845) 0:00:41.382 ******** 2026-01-30 05:17:20.070718 | orchestrator | [WARNING]: Skipped 2026-01-30 05:17:20.070725 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-30 05:17:20.070731 | orchestrator | to this access issue: 2026-01-30 05:17:20.070740 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-30 05:17:20.070750 | orchestrator | directory 2026-01-30 05:17:20.070759 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-30 05:17:20.070769 | orchestrator | 2026-01-30 05:17:20.070779 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-30 05:17:20.070790 | orchestrator | Friday 30 January 2026 05:17:08 +0000 (0:00:01.861) 0:00:43.244 ******** 2026-01-30 05:17:20.070801 | orchestrator | ok: [testbed-manager] 2026-01-30 05:17:20.070810 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:17:20.070816 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:17:20.070822 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:17:20.070828 | orchestrator | ok: [testbed-node-3] 2026-01-30 
05:17:20.070834 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:17:20.070840 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:17:20.070847 | orchestrator | 2026-01-30 05:17:20.070865 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-30 05:17:20.070872 | orchestrator | Friday 30 January 2026 05:17:12 +0000 (0:00:04.136) 0:00:47.380 ******** 2026-01-30 05:17:20.070893 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:17:20.070901 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:17:20.070909 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:17:20.070948 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:17:20.070956 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:17:20.070964 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:17:20.070971 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-30 05:17:20.070979 | orchestrator | 2026-01-30 05:17:20.070986 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-30 05:17:20.070993 | orchestrator | Friday 30 January 2026 05:17:16 +0000 (0:00:03.550) 0:00:50.931 ******** 2026-01-30 05:17:20.071000 | orchestrator | ok: [testbed-manager] 2026-01-30 05:17:20.071007 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:17:20.071014 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:17:20.071024 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:17:20.071043 | orchestrator | ok: [testbed-node-3] 2026-01-30 
05:17:20.071053 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:17:20.071064 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:17:20.071074 | orchestrator | 2026-01-30 05:17:20.071085 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-30 05:17:20.071096 | orchestrator | Friday 30 January 2026 05:17:19 +0000 (0:00:02.923) 0:00:53.855 ******** 2026-01-30 05:17:20.071108 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:20.071123 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:20.071131 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.071139 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:20.071153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:20.974073 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:20.974161 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:20.974188 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:20.974195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:20.974202 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:20.974210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:20.974231 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.974259 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:20.974270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:20.974283 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.974291 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.974297 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:20.974304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:20.974311 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.974317 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:20.974329 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:30.546877 | orchestrator | 2026-01-30 05:17:30.547024 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-30 05:17:30.547091 | orchestrator | Friday 30 January 2026 05:17:22 +0000 (0:00:02.902) 0:00:56.758 ******** 2026-01-30 05:17:30.547105 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:17:30.547116 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:17:30.547127 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:17:30.547137 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:17:30.547147 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:17:30.547157 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:17:30.547167 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-30 05:17:30.547178 | orchestrator | 2026-01-30 05:17:30.547189 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-30 05:17:30.547200 | orchestrator | Friday 30 January 2026 05:17:25 +0000 (0:00:03.036) 0:00:59.794 ******** 2026-01-30 05:17:30.547210 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
2026-01-30 05:17:30.547220 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:17:30.547227 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:17:30.547233 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:17:30.547239 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:17:30.547245 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:17:30.547251 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-30 05:17:30.547257 | orchestrator | 2026-01-30 05:17:30.547263 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-01-30 05:17:30.547270 | orchestrator | Friday 30 January 2026 05:17:28 +0000 (0:00:03.124) 0:01:02.919 ******** 2026-01-30 05:17:30.547278 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:30.547287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:30.547294 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:30.547317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:30.547343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:30.547351 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:30.547358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:30.547364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-30 05:17:30.547371 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:30.547380 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:30.547397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:30.547416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967269 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967382 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967454 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:17:34.967534 | orchestrator | 2026-01-30 05:17:34.967546 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-01-30 05:17:34.967557 | orchestrator | Friday 30 January 2026 05:17:32 +0000 (0:00:04.437) 0:01:07.357 ******** 2026-01-30 05:17:34.967567 | orchestrator | changed: [testbed-manager] => { 2026-01-30 05:17:34.967578 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:17:34.967587 | orchestrator | } 2026-01-30 05:17:34.967597 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:17:34.967606 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:17:34.967616 | orchestrator | } 2026-01-30 05:17:34.967625 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:17:34.967635 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:17:34.967644 | orchestrator | } 
2026-01-30 05:17:34.967653 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:17:34.967663 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:17:34.967672 | orchestrator | } 2026-01-30 05:17:34.967682 | orchestrator | changed: [testbed-node-3] => { 2026-01-30 05:17:34.967691 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:17:34.967700 | orchestrator | } 2026-01-30 05:17:34.967710 | orchestrator | changed: [testbed-node-4] => { 2026-01-30 05:17:34.967719 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:17:34.967728 | orchestrator | } 2026-01-30 05:17:34.967738 | orchestrator | changed: [testbed-node-5] => { 2026-01-30 05:17:34.967747 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:17:34.967756 | orchestrator | } 2026-01-30 05:17:34.967766 | orchestrator | 2026-01-30 05:17:34.967775 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:17:34.967785 | orchestrator | Friday 30 January 2026 05:17:34 +0000 (0:00:01.993) 0:01:09.350 ******** 2026-01-30 05:17:34.967796 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:17:34.967816 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:34.967828 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:34.967840 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:17:34.967852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:17:34.967871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137575 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:17:41.137605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:17:41.137627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137661 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137671 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:17:41.137696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:17:41.137706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137729 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:17:41.137755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:17:41.137765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-30 05:17:41.137800 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:17:41.137810 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:17:41.137819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:17:41.137837 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:17:41.137846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-30 05:17:41.137866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:19:07.611415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:19:07.611509 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:19:07.611538 | orchestrator | 2026-01-30 05:19:07.611546 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:19:07.611554 | orchestrator | Friday 30 January 2026 05:17:37 +0000 (0:00:02.820) 0:01:12.170 ******** 2026-01-30 05:19:07.611560 
| orchestrator | 2026-01-30 05:19:07.611566 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:19:07.611572 | orchestrator | Friday 30 January 2026 05:17:37 +0000 (0:00:00.461) 0:01:12.632 ******** 2026-01-30 05:19:07.611579 | orchestrator | 2026-01-30 05:19:07.611585 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:19:07.611591 | orchestrator | Friday 30 January 2026 05:17:38 +0000 (0:00:00.462) 0:01:13.095 ******** 2026-01-30 05:19:07.611597 | orchestrator | 2026-01-30 05:19:07.611603 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:19:07.611609 | orchestrator | Friday 30 January 2026 05:17:38 +0000 (0:00:00.440) 0:01:13.535 ******** 2026-01-30 05:19:07.611616 | orchestrator | 2026-01-30 05:19:07.611622 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:19:07.611628 | orchestrator | Friday 30 January 2026 05:17:39 +0000 (0:00:00.638) 0:01:14.174 ******** 2026-01-30 05:19:07.611634 | orchestrator | 2026-01-30 05:19:07.611640 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:19:07.611646 | orchestrator | Friday 30 January 2026 05:17:39 +0000 (0:00:00.439) 0:01:14.614 ******** 2026-01-30 05:19:07.611652 | orchestrator | 2026-01-30 05:19:07.611658 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-01-30 05:19:07.611664 | orchestrator | Friday 30 January 2026 05:17:40 +0000 (0:00:00.436) 0:01:15.051 ******** 2026-01-30 05:19:07.611670 | orchestrator | 2026-01-30 05:19:07.611676 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-01-30 05:19:07.611682 | orchestrator | Friday 30 January 2026 05:17:41 +0000 (0:00:00.817) 0:01:15.868 ******** 2026-01-30 
05:19:07.611688 | orchestrator | changed: [testbed-manager] 2026-01-30 05:19:07.611694 | orchestrator | changed: [testbed-node-3] 2026-01-30 05:19:07.611700 | orchestrator | changed: [testbed-node-5] 2026-01-30 05:19:07.611706 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:19:07.611712 | orchestrator | changed: [testbed-node-4] 2026-01-30 05:19:07.611718 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:19:07.611725 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:19:07.611731 | orchestrator | 2026-01-30 05:19:07.611756 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-01-30 05:19:07.611762 | orchestrator | Friday 30 January 2026 05:18:17 +0000 (0:00:36.760) 0:01:52.629 ******** 2026-01-30 05:19:07.611768 | orchestrator | changed: [testbed-manager] 2026-01-30 05:19:07.611774 | orchestrator | changed: [testbed-node-3] 2026-01-30 05:19:07.611780 | orchestrator | changed: [testbed-node-5] 2026-01-30 05:19:07.611786 | orchestrator | changed: [testbed-node-4] 2026-01-30 05:19:07.611792 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:19:07.611798 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:19:07.611804 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:19:07.611810 | orchestrator | 2026-01-30 05:19:07.611827 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-01-30 05:19:07.611834 | orchestrator | Friday 30 January 2026 05:18:51 +0000 (0:00:34.058) 0:02:26.687 ******** 2026-01-30 05:19:07.611840 | orchestrator | ok: [testbed-manager] 2026-01-30 05:19:07.611905 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:07.611911 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:07.611917 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:19:07.611923 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:07.611929 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:19:07.611936 | orchestrator | ok: [testbed-node-5] 
2026-01-30 05:19:07.611942 | orchestrator | 2026-01-30 05:19:07.611948 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-01-30 05:19:07.611955 | orchestrator | Friday 30 January 2026 05:18:55 +0000 (0:00:03.231) 0:02:29.919 ******** 2026-01-30 05:19:07.611969 | orchestrator | changed: [testbed-manager] 2026-01-30 05:19:07.611976 | orchestrator | changed: [testbed-node-3] 2026-01-30 05:19:07.611983 | orchestrator | changed: [testbed-node-4] 2026-01-30 05:19:07.611991 | orchestrator | changed: [testbed-node-5] 2026-01-30 05:19:07.611998 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:19:07.612005 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:19:07.612012 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:19:07.612019 | orchestrator | 2026-01-30 05:19:07.612027 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:19:07.612036 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:19:07.612056 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:19:07.612064 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:19:07.612072 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:19:07.612093 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:19:07.612101 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:19:07.612108 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:19:07.612115 | orchestrator | 2026-01-30 05:19:07.612123 | orchestrator | 2026-01-30 05:19:07.612130 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:19:07.612137 | orchestrator | Friday 30 January 2026 05:19:07 +0000 (0:00:11.927) 0:02:41.846 ******** 2026-01-30 05:19:07.612145 | orchestrator | =============================================================================== 2026-01-30 05:19:07.612153 | orchestrator | common : Restart fluentd container ------------------------------------- 36.76s 2026-01-30 05:19:07.612160 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.06s 2026-01-30 05:19:07.612167 | orchestrator | common : Restart cron container ---------------------------------------- 11.93s 2026-01-30 05:19:07.612173 | orchestrator | common : Copying over config.json files for services -------------------- 4.70s 2026-01-30 05:19:07.612179 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.49s 2026-01-30 05:19:07.612185 | orchestrator | service-check-containers : common | Check containers -------------------- 4.44s 2026-01-30 05:19:07.612191 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.14s 2026-01-30 05:19:07.612197 | orchestrator | common : Flush handlers ------------------------------------------------- 3.70s 2026-01-30 05:19:07.612203 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.55s 2026-01-30 05:19:07.612209 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.23s 2026-01-30 05:19:07.612216 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.23s 2026-01-30 05:19:07.612222 | orchestrator | common : include_tasks -------------------------------------------------- 3.20s 2026-01-30 05:19:07.612228 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.12s 2026-01-30 05:19:07.612234 | orchestrator | common 
: Ensuring config directories exist ------------------------------ 3.10s 2026-01-30 05:19:07.612240 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.04s 2026-01-30 05:19:07.612246 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.95s 2026-01-30 05:19:07.612258 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.92s 2026-01-30 05:19:07.612264 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.92s 2026-01-30 05:19:07.612270 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.90s 2026-01-30 05:19:07.612276 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.82s 2026-01-30 05:19:07.948814 | orchestrator | + osism apply -a upgrade loadbalancer 2026-01-30 05:19:10.114402 | orchestrator | 2026-01-30 05:19:10 | INFO  | Task 9a11e5b9-8fc3-428e-95c9-af760fc2fdd8 (loadbalancer) was prepared for execution. 2026-01-30 05:19:10.114532 | orchestrator | 2026-01-30 05:19:10 | INFO  | It takes a moment until task 9a11e5b9-8fc3-428e-95c9-af760fc2fdd8 (loadbalancer) has been started and output is visible here. 
2026-01-30 05:19:44.478316 | orchestrator | 2026-01-30 05:19:44.478436 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:19:44.478453 | orchestrator | 2026-01-30 05:19:44.478472 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:19:44.478495 | orchestrator | Friday 30 January 2026 05:19:15 +0000 (0:00:01.418) 0:00:01.418 ******** 2026-01-30 05:19:44.478524 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:44.478547 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:44.478589 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:44.478625 | orchestrator | 2026-01-30 05:19:44.478644 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 05:19:44.478661 | orchestrator | Friday 30 January 2026 05:19:16 +0000 (0:00:01.552) 0:00:02.971 ******** 2026-01-30 05:19:44.478680 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-01-30 05:19:44.478698 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-01-30 05:19:44.478718 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-01-30 05:19:44.478737 | orchestrator | 2026-01-30 05:19:44.478753 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-01-30 05:19:44.478771 | orchestrator | 2026-01-30 05:19:44.478788 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-30 05:19:44.478826 | orchestrator | Friday 30 January 2026 05:19:18 +0000 (0:00:02.049) 0:00:05.020 ******** 2026-01-30 05:19:44.478916 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:19:44.478937 | orchestrator | 2026-01-30 05:19:44.478957 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-01-30 05:19:44.478977 | orchestrator | Friday 30 January 2026 05:19:21 +0000 (0:00:02.471) 0:00:07.492 ******** 2026-01-30 05:19:44.478996 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:44.479017 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:44.479036 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:44.479055 | orchestrator | 2026-01-30 05:19:44.479074 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-01-30 05:19:44.479089 | orchestrator | Friday 30 January 2026 05:19:23 +0000 (0:00:02.014) 0:00:09.506 ******** 2026-01-30 05:19:44.479102 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:44.479115 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:44.479127 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:44.479139 | orchestrator | 2026-01-30 05:19:44.479152 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-01-30 05:19:44.479165 | orchestrator | Friday 30 January 2026 05:19:25 +0000 (0:00:01.937) 0:00:11.444 ******** 2026-01-30 05:19:44.479177 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:44.479205 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:44.479216 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:44.479227 | orchestrator | 2026-01-30 05:19:44.479239 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-30 05:19:44.479250 | orchestrator | Friday 30 January 2026 05:19:26 +0000 (0:00:01.579) 0:00:13.024 ******** 2026-01-30 05:19:44.479286 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:19:44.479297 | orchestrator | 2026-01-30 05:19:44.479308 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-01-30 05:19:44.479319 | orchestrator | Friday 30 January 2026 05:19:28 +0000 (0:00:01.803) 0:00:14.827 ******** 2026-01-30 
05:19:44.479329 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:44.479340 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:44.479351 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:44.479362 | orchestrator | 2026-01-30 05:19:44.479373 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-01-30 05:19:44.479384 | orchestrator | Friday 30 January 2026 05:19:30 +0000 (0:00:01.786) 0:00:16.614 ******** 2026-01-30 05:19:44.479394 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-30 05:19:44.479405 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-30 05:19:44.479416 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-30 05:19:44.479427 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-30 05:19:44.479437 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-30 05:19:44.479449 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-30 05:19:44.479460 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-01-30 05:19:44.479470 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-30 05:19:44.479481 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-01-30 05:19:44.479492 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-01-30 05:19:44.479502 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-01-30 05:19:44.479513 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-01-30 05:19:44.479523 | orchestrator | 2026-01-30 05:19:44.479534 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-30 05:19:44.479547 | orchestrator | Friday 30 January 2026 05:19:35 +0000 (0:00:05.044) 0:00:21.659 ******** 2026-01-30 05:19:44.479566 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-01-30 05:19:44.479584 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-01-30 05:19:44.479603 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-01-30 05:19:44.479621 | orchestrator | 2026-01-30 05:19:44.479632 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-30 05:19:44.479665 | orchestrator | Friday 30 January 2026 05:19:37 +0000 (0:00:02.011) 0:00:23.670 ******** 2026-01-30 05:19:44.479677 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-01-30 05:19:44.479688 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-01-30 05:19:44.479699 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-01-30 05:19:44.479710 | orchestrator | 2026-01-30 05:19:44.479721 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-30 05:19:44.479731 | orchestrator | Friday 30 January 2026 05:19:39 +0000 (0:00:02.248) 0:00:25.919 ******** 2026-01-30 05:19:44.479742 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-01-30 05:19:44.479753 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:19:44.479764 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-01-30 05:19:44.479774 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:19:44.479786 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-01-30 05:19:44.479804 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:19:44.479822 | orchestrator | 2026-01-30 05:19:44.479867 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-01-30 05:19:44.479887 | orchestrator | Friday 30 January 2026 05:19:41 +0000 (0:00:01.891) 0:00:27.810 ******** 2026-01-30 05:19:44.479938 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 05:19:44.479969 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 05:19:44.479990 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 05:19:44.480011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:19:44.480030 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:19:44.480065 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:19:55.225128 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:19:55.225213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:19:55.225221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:19:55.225227 | orchestrator | 2026-01-30 05:19:55.225234 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-01-30 05:19:55.225240 | orchestrator | Friday 30 January 2026 05:19:44 +0000 (0:00:02.681) 0:00:30.492 ******** 2026-01-30 05:19:55.225245 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:55.225252 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:55.225257 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:55.225263 | orchestrator | 2026-01-30 05:19:55.225268 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-01-30 05:19:55.225273 | orchestrator | Friday 30 January 2026 05:19:46 +0000 (0:00:01.990) 0:00:32.483 ******** 2026-01-30 05:19:55.225278 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-01-30 05:19:55.225285 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-01-30 05:19:55.225290 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-01-30 05:19:55.225295 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-01-30 05:19:55.225300 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-01-30 05:19:55.225305 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-01-30 05:19:55.225310 | orchestrator | 2026-01-30 05:19:55.225315 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-01-30 05:19:55.225321 | orchestrator | Friday 30 January 2026 05:19:49 +0000 (0:00:02.760) 0:00:35.243 ******** 2026-01-30 05:19:55.225326 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:19:55.225331 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:55.225336 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:55.225341 | orchestrator | 2026-01-30 05:19:55.225346 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-01-30 05:19:55.225351 | orchestrator | Friday 30 January 2026 05:19:51 +0000 (0:00:02.203) 0:00:37.447 ******** 2026-01-30 05:19:55.225356 | orchestrator | ok: 
[testbed-node-0] 2026-01-30 05:19:55.225361 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:19:55.225366 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:19:55.225371 | orchestrator | 2026-01-30 05:19:55.225376 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-01-30 05:19:55.225381 | orchestrator | Friday 30 January 2026 05:19:53 +0000 (0:00:02.159) 0:00:39.606 ******** 2026-01-30 05:19:55.225387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 05:19:55.225421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:19:55.225431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:19:55.225438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 05:19:55.225445 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:19:55.225451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 05:19:55.225457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:19:55.225462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:19:55.225473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 05:19:55.225479 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
05:19:55.225491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 05:19:59.280114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:19:59.280197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:19:59.280207 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 05:19:59.280213 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:19:59.280220 | orchestrator | 2026-01-30 05:19:59.280226 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-30 05:19:59.280249 | orchestrator | Friday 30 January 2026 05:19:55 +0000 (0:00:01.625) 0:00:41.232 ******** 2026-01-30 05:19:59.280255 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 05:19:59.280261 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 05:19:59.280266 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 05:19:59.280284 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:19:59.280290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:19:59.280295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 05:19:59.280326 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:19:59.280332 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:19:59.280337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:19:59.280350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:20:11.386272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 05:20:11.386419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99', '__omit_place_holder__cf8c4b4489e6c2b6b32e0be5e5c2829ba9109e99'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-30 05:20:11.386484 | orchestrator | 2026-01-30 05:20:11.386508 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-30 05:20:11.386528 | orchestrator | Friday 30 January 2026 05:19:59 +0000 (0:00:04.063) 0:00:45.296 ******** 2026-01-30 05:20:11.386549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:11.386570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:11.386590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:11.386654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:20:11.386677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:20:11.386696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:20:11.386722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:20:11.386734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:20:11.386746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:20:11.386757 | orchestrator | 2026-01-30 05:20:11.386768 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-30 05:20:11.386779 | orchestrator | Friday 30 January 2026 05:20:04 +0000 (0:00:05.154) 0:00:50.450 ******** 2026-01-30 05:20:11.386789 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-30 05:20:11.386801 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-30 05:20:11.386812 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-30 05:20:11.386862 | orchestrator | 2026-01-30 05:20:11.386876 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-01-30 05:20:11.386895 | orchestrator | Friday 30 January 2026 05:20:07 +0000 (0:00:02.612) 0:00:53.062 ******** 2026-01-30 05:20:11.386908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-30 05:20:11.386921 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-30 05:20:11.386933 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-30 05:20:11.386945 | orchestrator | 2026-01-30 05:20:11.386958 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-30 05:20:11.386981 | orchestrator | Friday 30 January 2026 05:20:11 +0000 (0:00:04.340) 0:00:57.403 ******** 2026-01-30 05:20:33.787001 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:20:33.787111 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:20:33.787124 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:20:33.787134 | orchestrator | 2026-01-30 05:20:33.787143 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-30 05:20:33.787152 | orchestrator | Friday 30 January 2026 05:20:13 +0000 (0:00:01.893) 0:00:59.296 ******** 2026-01-30 05:20:33.787161 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-30 05:20:33.787192 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-30 05:20:33.787200 | orchestrator | ok: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-30 05:20:33.787208 | orchestrator | 2026-01-30 05:20:33.787216 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-30 05:20:33.787224 | orchestrator | Friday 30 January 2026 05:20:16 +0000 (0:00:02.994) 0:01:02.291 ******** 2026-01-30 05:20:33.787232 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-30 05:20:33.787241 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-30 05:20:33.787249 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-30 05:20:33.787257 | orchestrator | 2026-01-30 05:20:33.787265 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-30 05:20:33.787272 | orchestrator | Friday 30 January 2026 05:20:19 +0000 (0:00:02.757) 0:01:05.048 ******** 2026-01-30 05:20:33.787280 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:20:33.787288 | orchestrator | 2026-01-30 05:20:33.787295 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-30 05:20:33.787303 | orchestrator | Friday 30 January 2026 05:20:20 +0000 (0:00:01.867) 0:01:06.916 ******** 2026-01-30 05:20:33.787326 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-01-30 05:20:33.787343 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-01-30 05:20:33.787351 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-01-30 05:20:33.787358 | orchestrator | 2026-01-30 05:20:33.787371 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-30 05:20:33.787384 | 
orchestrator | Friday 30 January 2026 05:20:23 +0000 (0:00:02.596) 0:01:09.512 ******** 2026-01-30 05:20:33.787397 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-30 05:20:33.787411 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-30 05:20:33.787424 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-30 05:20:33.787437 | orchestrator | 2026-01-30 05:20:33.787450 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-01-30 05:20:33.787466 | orchestrator | Friday 30 January 2026 05:20:26 +0000 (0:00:02.733) 0:01:12.246 ******** 2026-01-30 05:20:33.787479 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:20:33.787493 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:20:33.787509 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:20:33.787523 | orchestrator | 2026-01-30 05:20:33.787535 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-01-30 05:20:33.787545 | orchestrator | Friday 30 January 2026 05:20:27 +0000 (0:00:01.375) 0:01:13.621 ******** 2026-01-30 05:20:33.787554 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:20:33.787564 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:20:33.787575 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:20:33.787587 | orchestrator | 2026-01-30 05:20:33.787596 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-30 05:20:33.787605 | orchestrator | Friday 30 January 2026 05:20:29 +0000 (0:00:01.920) 0:01:15.542 ******** 2026-01-30 05:20:33.787618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:33.787669 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:33.787680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:33.787690 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:20:33.787699 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:20:33.787708 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:20:33.787718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:20:33.787739 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:20:33.787755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:20:37.345528 | orchestrator | 2026-01-30 05:20:37.345629 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-30 05:20:37.345646 | orchestrator | Friday 30 January 2026 05:20:33 +0000 (0:00:04.255) 0:01:19.798 ******** 2026-01-30 05:20:37.345662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 05:20:37.345678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:20:37.345689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:20:37.345702 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:20:37.345714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 05:20:37.345751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:20:37.345777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:20:37.345789 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:20:37.345892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 05:20:37.345908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:20:37.345920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:20:37.345932 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:20:37.345943 | orchestrator | 2026-01-30 05:20:37.345955 | orchestrator | TASK [service-cert-copy : 
mariadb | Copying over backend internal TLS key] ***** 2026-01-30 05:20:37.345966 | orchestrator | Friday 30 January 2026 05:20:35 +0000 (0:00:01.564) 0:01:21.362 ******** 2026-01-30 05:20:37.345978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 05:20:37.346001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:20:37.346082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:20:37.346099 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:20:37.346124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 05:20:48.810152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:20:48.810270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:20:48.810289 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:20:48.810305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 05:20:48.810341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:20:48.810354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:20:48.810410 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:20:48.810425 | orchestrator | 2026-01-30 05:20:48.810437 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-30 05:20:48.810450 | orchestrator | Friday 30 January 2026 05:20:37 +0000 (0:00:02.000) 0:01:23.363 ******** 2026-01-30 05:20:48.810461 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-30 05:20:48.810473 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-30 05:20:48.810484 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-30 05:20:48.810494 | orchestrator | 2026-01-30 05:20:48.810510 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-30 05:20:48.810530 | orchestrator | Friday 30 January 2026 05:20:39 +0000 (0:00:02.431) 0:01:25.794 ******** 2026-01-30 05:20:48.810549 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-30 05:20:48.810568 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-30 05:20:48.810587 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-30 05:20:48.810606 | orchestrator | 2026-01-30 05:20:48.810649 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-30 05:20:48.810672 | orchestrator | Friday 30 January 2026 05:20:42 +0000 (0:00:02.464) 0:01:28.259 ******** 2026-01-30 
05:20:48.810692 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 05:20:48.810713 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 05:20:48.810732 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-30 05:20:48.810753 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 05:20:48.810774 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:20:48.810794 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 05:20:48.810838 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:20:48.810850 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-30 05:20:48.810862 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:20:48.810874 | orchestrator | 2026-01-30 05:20:48.810887 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-30 05:20:48.810915 | orchestrator | Friday 30 January 2026 05:20:44 +0000 (0:00:02.435) 0:01:30.695 ******** 2026-01-30 05:20:48.810928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 
'timeout': '30'}}}) 2026-01-30 05:20:48.810941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:48.810952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 05:20:48.810971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-30 05:20:48.810994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-30 05:20:52.374338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-30 05:20:52.374493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-30 05:20:52.374512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-30 05:20:52.374521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-30 05:20:52.374529 | orchestrator |
2026-01-30 05:20:52.374539 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-01-30 05:20:52.374601 | orchestrator | Friday 30 January 2026 05:20:48 +0000 (0:00:04.130) 0:01:34.826 ********
2026-01-30 05:20:52.374612 | orchestrator | changed: [testbed-node-0] => {
2026-01-30 05:20:52.374621 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:20:52.374629 | orchestrator | }
2026-01-30 05:20:52.374637 | orchestrator | changed: [testbed-node-1] => {
2026-01-30 05:20:52.374650 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:20:52.374662 | orchestrator | }
2026-01-30 05:20:52.374674 | orchestrator | changed: [testbed-node-2] => {
2026-01-30 05:20:52.374685 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:20:52.374696 | orchestrator | }
2026-01-30 05:20:52.374709 | orchestrator |
2026-01-30 05:20:52.374722 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-30 05:20:52.374735 | orchestrator | Friday 30 January 2026 05:20:50 +0000 (0:00:01.355) 0:01:36.181 ********
2026-01-30 05:20:52.374749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-30 05:20:52.374776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-30 05:20:52.374794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-30 05:20:52.374847 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:20:52.374870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-30 05:20:52.374878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-30 05:20:52.374888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-30 05:20:52.374896 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:20:52.374909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-30 05:20:52.374919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-30 05:20:52.374941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-30 05:20:57.788863 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:20:57.788994 | orchestrator |
2026-01-30 05:20:57.789022 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-01-30 05:20:57.789043 | orchestrator | Friday 30 January 2026 05:20:52 +0000 (0:00:02.203) 0:01:38.384 ********
2026-01-30 05:20:57.789064 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:20:57.789084 | orchestrator |
2026-01-30 05:20:57.789104 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-01-30 05:20:57.789124 | orchestrator | Friday 30 January 2026 05:20:54 +0000 (0:00:01.883) 0:01:40.268 ********
2026-01-30 05:20:57.789151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:20:57.789177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-30 05:20:57.789209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:20:57.789222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-30 05:20:57.789282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:20:57.789305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-30 05:20:57.789325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:20:57.789344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-30 05:20:57.789371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:20:57.789404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-30 05:20:57.789437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:20:59.501668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-30 05:20:59.501764 | orchestrator |
2026-01-30 05:20:59.501774 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-01-30 05:20:59.501782 | orchestrator | Friday 30 January 2026 05:20:58 +0000 (0:00:04.660) 0:01:44.929 ********
2026-01-30 05:20:59.501842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:20:59.501870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:20:59.501892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-30 05:20:59.501898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:20:59.501918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-30 05:20:59.501924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-30 05:20:59.501930 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:20:59.501936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:20:59.501942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-30 05:20:59.501948 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:20:59.501962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:20:59.501968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-30 05:20:59.501978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:21:14.137186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-30 05:21:14.137291 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:14.137304 | orchestrator |
2026-01-30 05:21:14.137314 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-30 05:21:14.137323 | orchestrator | Friday 30 January 2026 05:21:00 +0000 (0:00:01.671) 0:01:46.601 ********
2026-01-30 05:21:14.137332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:14.137343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:14.137353 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:21:14.137367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:14.137406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:14.137420 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:14.137450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:14.137467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:14.137482 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:14.137496 | orchestrator |
2026-01-30 05:21:14.137508 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-30 05:21:14.137516 | orchestrator | Friday 30 January 2026 05:21:02 +0000 (0:00:02.193) 0:01:48.795 ********
2026-01-30 05:21:14.137524 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:21:14.137534 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:21:14.137542 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:21:14.137550 | orchestrator |
2026-01-30 05:21:14.137558 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-30 05:21:14.137566 | orchestrator | Friday 30 January 2026 05:21:05 +0000 (0:00:02.359) 0:01:51.154 ********
2026-01-30 05:21:14.137574 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:21:14.137581 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:21:14.137589 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:21:14.137597 | orchestrator |
2026-01-30 05:21:14.137605 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-30 05:21:14.137613 | orchestrator | Friday 30 January 2026 05:21:07 +0000 (0:00:02.767) 0:01:53.922 ********
2026-01-30 05:21:14.137621 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:21:14.137628 | orchestrator |
2026-01-30 05:21:14.137636 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-30 05:21:14.137644 | orchestrator | Friday 30 January 2026 05:21:09 +0000 (0:00:01.610) 0:01:55.533 ********
2026-01-30 05:21:14.137672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:21:14.137684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:21:14.137701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-30 05:21:14.137716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:21:14.137725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:21:14.137741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:21:15.768099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-30 05:21:15.768236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:21:15.768281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-30 05:21:15.768303 | orchestrator |
2026-01-30 05:21:15.768326 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-30 05:21:15.768346 | orchestrator | Friday 30 January 2026 05:21:14 +0000 (0:00:04.619) 0:02:00.153 ********
2026-01-30 05:21:15.768370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes',
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:21:15.768394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 05:21:15.768440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:21:15.768464 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:21:15.768477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:21:15.768496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-30 05:21:15.768508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-30 05:21:15.768519 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:15.768566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:21:15.768589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-30 05:21:31.998847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-30 05:21:31.998958 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:31.998975 | orchestrator |
2026-01-30 05:21:31.998984 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-01-30 05:21:31.998994 | orchestrator | Friday 30 January 2026 05:21:15 +0000 (0:00:01.628) 0:02:01.781 ********
2026-01-30 05:21:31.999002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:31.999029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:31.999039 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:21:31.999046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:31.999054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:31.999061 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:31.999069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:31.999077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:21:31.999084 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:31.999090 | orchestrator |
2026-01-30 05:21:31.999098 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-01-30 05:21:31.999105 | orchestrator | Friday 30 January 2026 05:21:17 +0000 (0:00:02.256) 0:02:03.544 ********
2026-01-30 05:21:31.999112 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:21:31.999120 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:21:31.999127 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:21:31.999133 | orchestrator |
2026-01-30 05:21:31.999140 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-01-30 05:21:31.999170 | orchestrator | Friday 30 January 2026 05:21:19 +0000 (0:00:02.256) 0:02:05.801 ********
2026-01-30 05:21:31.999178 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:21:31.999185 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:21:31.999192 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:21:31.999199 | orchestrator |
2026-01-30 05:21:31.999206 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-01-30 05:21:31.999214 | orchestrator | Friday 30 January 2026 05:21:22 +0000 (0:00:02.898) 0:02:08.699 ********
2026-01-30 05:21:31.999220 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:21:31.999227 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:31.999234 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:31.999242 | orchestrator |
2026-01-30 05:21:31.999249 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-01-30 05:21:31.999256 | orchestrator | Friday 30 January 2026 05:21:24 +0000 (0:00:01.357) 0:02:10.057 ********
2026-01-30 05:21:31.999263 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:21:31.999271 | orchestrator |
2026-01-30 05:21:31.999277 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-01-30 05:21:31.999284 | orchestrator | Friday 30 January 2026 05:21:25 +0000 (0:00:01.670) 0:02:11.727 ********
2026-01-30 05:21:31.999311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-30 05:21:31.999324 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-30 05:21:31.999333 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-30 05:21:31.999341 | orchestrator |
2026-01-30 05:21:31.999349 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-01-30
05:21:31.999365 | orchestrator | Friday 30 January 2026 05:21:29 +0000 (0:00:03.564) 0:02:15.292 ********
2026-01-30 05:21:31.999373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-30 05:21:31.999381 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:21:31.999390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-30 05:21:31.999397 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:31.999416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-30 05:21:44.110123 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:44.110260 | orchestrator |
2026-01-30 05:21:44.110286 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2026-01-30 05:21:44.110305 | orchestrator | Friday 30 January 2026 05:21:31 +0000 (0:00:02.721) 0:02:18.013 ********
2026-01-30 05:21:44.110345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-30 05:21:44.110366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-30 05:21:44.110385 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:21:44.110426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-30 05:21:44.110438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-30 05:21:44.110448 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:44.110458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-30 05:21:44.110468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2026-01-30 05:21:44.110478 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:44.110488 | orchestrator |
2026-01-30 05:21:44.110498 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-01-30 05:21:44.110508 | orchestrator | Friday 30 January 2026 05:21:34 +0000 (0:00:02.766) 0:02:20.780 ********
2026-01-30 05:21:44.110518 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:21:44.110527 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:44.110536 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:44.110546 | orchestrator |
2026-01-30 05:21:44.110555 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-01-30 05:21:44.110567 | orchestrator | Friday 30 January 2026 05:21:36 +0000 (0:00:01.429) 0:02:22.210 ********
2026-01-30 05:21:44.110578 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:21:44.110589 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:21:44.110600 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:21:44.110611 | orchestrator |
2026-01-30 05:21:44.110626 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-01-30 05:21:44.110642 | orchestrator | Friday 30 January 2026 05:21:38 +0000 (0:00:02.439) 0:02:24.650 ********
2026-01-30 05:21:44.110659 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:21:44.110677 | orchestrator |
2026-01-30 05:21:44.110695 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2026-01-30 05:21:44.110712 | orchestrator | Friday 30 January 2026 05:21:40 +0000 (0:00:01.741) 0:02:26.391 ********
2026-01-30 05:21:44.110808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group':
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:21:44.110852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 05:21:44.110871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 05:21:44.110891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 05:21:44.110911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:21:44.110950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 05:21:46.130183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 05:21:46.130291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:21:46.130310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 05:21:46.130323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-01-30 05:21:46.130336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-01-30 05:21:46.130407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-01-30 05:21:46.130424 | orchestrator |
2026-01-30 05:21:46.130437 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-01-30 05:21:46.130450 | orchestrator | Friday 30 January
2026 05:21:45 +0000 (0:00:04.896) 0:02:31.288 ******** 2026-01-30 05:21:46.130463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:21:46.130475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:21:46.130487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 05:21:46.130500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 05:21:46.130519 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:21:46.130545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:21:57.256076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:21:57.256194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 05:21:57.256207 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 05:21:57.256216 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:21:57.256227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:21:57.256271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:21:57.256293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-30 05:21:57.256302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-01-30 05:21:57.256310 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:21:57.256318 | orchestrator | 2026-01-30 05:21:57.256326 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-30 05:21:57.256334 | orchestrator | Friday 30 January 2026 05:21:47 +0000 (0:00:01.939) 0:02:33.228 ******** 2026-01-30 05:21:57.256343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:21:57.256352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:21:57.256361 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:21:57.256368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:21:57.256382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:21:57.256390 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:21:57.256397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-01-30 05:21:57.256405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:21:57.256412 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:21:57.256420 | orchestrator | 2026-01-30 05:21:57.256427 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-01-30 05:21:57.256439 | orchestrator | Friday 30 January 2026 05:21:49 +0000 (0:00:02.038) 0:02:35.266 ******** 2026-01-30 05:21:57.256446 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:21:57.256454 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:21:57.256462 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:21:57.256469 | orchestrator | 2026-01-30 05:21:57.256477 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-01-30 05:21:57.256484 | orchestrator | Friday 30 January 2026 05:21:51 +0000 (0:00:02.310) 0:02:37.577 ******** 2026-01-30 05:21:57.256491 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:21:57.256498 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:21:57.256506 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:21:57.256513 | orchestrator | 2026-01-30 05:21:57.256520 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-01-30 05:21:57.256527 | orchestrator | Friday 30 January 2026 05:21:54 +0000 (0:00:02.832) 0:02:40.410 ******** 2026-01-30 05:21:57.256534 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:21:57.256542 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:21:57.256549 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:21:57.256556 | orchestrator | 2026-01-30 05:21:57.256563 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-01-30 05:21:57.256570 | orchestrator | Friday 30 January 2026 05:21:55 +0000 (0:00:01.544) 0:02:41.954 ******** 2026-01-30 05:21:57.256578 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:21:57.256585 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:21:57.256596 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:02.818270 | orchestrator | 2026-01-30 05:22:02.818370 | orchestrator | TASK [include_role : designate] ************************************************ 2026-01-30 05:22:02.818384 | orchestrator | Friday 30 January 2026 05:21:57 +0000 (0:00:01.317) 0:02:43.272 ******** 2026-01-30 05:22:02.818393 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:22:02.818402 | orchestrator | 2026-01-30 05:22:02.818411 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-01-30 05:22:02.818419 | orchestrator | Friday 30 January 2026 05:21:59 +0000 (0:00:01.881) 0:02:45.153 ******** 2026-01-30 05:22:02.818434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:22:02.818472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 05:22:02.818484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 05:22:02.818507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 05:22:02.818513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 05:22:02.818532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:22:02.818538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 05:22:02.818548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:22:02.818554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:22:02.818562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 05:22:02.818574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 05:22:04.622004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 05:22:04.622393 | orchestrator | 2026-01-30 05:22:04.622415 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-01-30 05:22:04.622435 | orchestrator | Friday 30 January 2026 05:22:04 +0000 (0:00:04.877) 0:02:50.031 ******** 2026-01-30 05:22:04.622462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:22:04.622486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 05:22:04.622529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860546 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:05.860562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:22:05.860619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 05:22:05.860634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 05:22:05.860699 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:05.861518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:22:20.434004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-30 05:22:20.434173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-30 05:22:20.434229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-30 05:22:20.434242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-30 05:22:20.434286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:22:20.434298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-01-30 05:22:20.434309 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:20.434320 | orchestrator | 2026-01-30 05:22:20.434331 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-01-30 05:22:20.434342 | orchestrator | Friday 30 January 2026 05:22:05 +0000 (0:00:01.847) 0:02:51.879 ******** 2026-01-30 05:22:20.434370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:20.434384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:20.434395 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:20.434405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:20.434415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:20.434425 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:20.434435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:20.434475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:20.434487 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:20.434500 | orchestrator | 2026-01-30 05:22:20.434511 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-01-30 05:22:20.434523 | orchestrator | Friday 30 January 2026 05:22:07 +0000 (0:00:02.026) 0:02:53.905 ******** 2026-01-30 05:22:20.434535 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:22:20.434546 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:22:20.434557 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:22:20.434568 | orchestrator | 2026-01-30 05:22:20.434580 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-01-30 05:22:20.434590 | orchestrator | Friday 30 January 2026 05:22:10 +0000 (0:00:02.253) 0:02:56.159 ******** 2026-01-30 05:22:20.434609 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:22:20.434620 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:22:20.434630 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:22:20.434641 | orchestrator | 2026-01-30 05:22:20.434652 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-01-30 05:22:20.434663 | orchestrator | Friday 30 January 2026 05:22:13 +0000 (0:00:02.916) 0:02:59.075 ******** 2026-01-30 05:22:20.434672 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:20.434682 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:20.434691 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:20.434700 | orchestrator | 2026-01-30 05:22:20.434710 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-01-30 05:22:20.434719 | orchestrator | Friday 30 January 2026 05:22:14 +0000 (0:00:01.296) 0:03:00.371 ******** 2026-01-30 05:22:20.434729 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:22:20.434739 | orchestrator | 2026-01-30 05:22:20.434748 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-01-30 05:22:20.434784 | orchestrator | Friday 30 January 2026 05:22:16 +0000 (0:00:01.778) 0:03:02.150 ******** 2026-01-30 05:22:20.434822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 05:22:21.480899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 05:22:21.480995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 05:22:21.481013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-30 
05:22:21.481025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 
05:22:21.481034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 
05:22:24.316137 | orchestrator | 2026-01-30 05:22:24.316226 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-30 05:22:24.316238 | orchestrator | Friday 30 January 2026 05:22:21 +0000 (0:00:05.355) 0:03:07.506 ******** 2026-01-30 05:22:24.316277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 05:22:24.316303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 05:22:24.316342 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:24.316383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 05:22:24.316401 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 05:22:24.316426 | orchestrator | 
skipping: [testbed-node-2] 2026-01-30 05:22:24.316453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-30 05:22:40.952194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-30 05:22:40.952346 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:40.952366 | orchestrator | 2026-01-30 05:22:40.952379 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-01-30 05:22:40.952413 | orchestrator | Friday 30 January 2026 05:22:25 +0000 (0:00:03.776) 0:03:11.283 ******** 2026-01-30 05:22:40.952427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 05:22:40.952441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 05:22:40.952453 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:40.952465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 05:22:40.952504 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 05:22:40.952517 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:40.952529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 05:22:40.952540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-30 05:22:40.952551 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:40.952562 | orchestrator | 2026-01-30 05:22:40.952574 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-01-30 05:22:40.952585 | orchestrator | Friday 30 January 2026 05:22:29 +0000 (0:00:04.010) 0:03:15.293 ******** 2026-01-30 05:22:40.952596 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:22:40.952607 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:22:40.952627 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:22:40.952638 | orchestrator | 2026-01-30 05:22:40.952649 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-30 05:22:40.952660 | orchestrator | Friday 30 January 2026 05:22:31 +0000 (0:00:02.182) 0:03:17.476 ******** 2026-01-30 05:22:40.952671 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:22:40.952683 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:22:40.952696 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:22:40.952708 | orchestrator | 2026-01-30 05:22:40.952721 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-30 05:22:40.952734 | orchestrator | Friday 30 January 2026 05:22:34 +0000 (0:00:02.610) 0:03:20.086 ******** 2026-01-30 05:22:40.952773 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:40.952787 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:40.952799 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:40.952811 | orchestrator | 2026-01-30 05:22:40.952824 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-30 05:22:40.952837 | orchestrator | Friday 30 January 2026 05:22:35 +0000 (0:00:01.309) 0:03:21.396 ******** 2026-01-30 05:22:40.952849 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:22:40.952861 | orchestrator | 2026-01-30 05:22:40.952873 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-30 05:22:40.952885 | orchestrator | Friday 30 January 2026 05:22:36 +0000 (0:00:01.562) 0:03:22.958 ******** 2026-01-30 
05:22:40.952899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:22:40.952923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:22:56.823309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:22:56.823436 | orchestrator | 2026-01-30 05:22:56.823451 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-30 05:22:56.823481 | orchestrator | Friday 30 January 2026 05:22:40 +0000 (0:00:04.012) 0:03:26.970 ******** 2026-01-30 05:22:56.823492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:22:56.823501 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:56.823511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:22:56.823520 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:56.823532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:22:56.823547 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:56.823568 | orchestrator | 2026-01-30 05:22:56.823590 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-30 05:22:56.823604 | orchestrator | Friday 30 January 2026 05:22:42 +0000 (0:00:01.555) 0:03:28.525 ******** 2026-01-30 05:22:56.823621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:56.823638 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:56.823655 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:56.823703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:56.823723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:56.823779 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:56.823808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:56.823822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:22:56.823833 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:56.823843 | orchestrator | 2026-01-30 05:22:56.823852 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-30 05:22:56.823862 | orchestrator | Friday 30 January 2026 05:22:43 +0000 (0:00:01.413) 0:03:29.939 ******** 2026-01-30 05:22:56.823872 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:22:56.823882 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:22:56.823892 | 
orchestrator | ok: [testbed-node-2] 2026-01-30 05:22:56.823902 | orchestrator | 2026-01-30 05:22:56.823911 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-30 05:22:56.823921 | orchestrator | Friday 30 January 2026 05:22:46 +0000 (0:00:02.124) 0:03:32.064 ******** 2026-01-30 05:22:56.823930 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:22:56.823940 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:22:56.823949 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:22:56.823959 | orchestrator | 2026-01-30 05:22:56.823967 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-30 05:22:56.823976 | orchestrator | Friday 30 January 2026 05:22:49 +0000 (0:00:02.969) 0:03:35.033 ******** 2026-01-30 05:22:56.823984 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:56.823993 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:22:56.824001 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:22:56.824010 | orchestrator | 2026-01-30 05:22:56.824018 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-30 05:22:56.824027 | orchestrator | Friday 30 January 2026 05:22:50 +0000 (0:00:01.312) 0:03:36.346 ******** 2026-01-30 05:22:56.824035 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:22:56.824044 | orchestrator | 2026-01-30 05:22:56.824052 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-30 05:22:56.824061 | orchestrator | Friday 30 January 2026 05:22:51 +0000 (0:00:01.670) 0:03:38.017 ******** 2026-01-30 05:22:56.824088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 
05:22:58.544141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 05:22:58.544284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-30 05:22:58.544325 | orchestrator | 2026-01-30 05:22:58.544340 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-30 05:22:58.544353 | orchestrator | Friday 30 January 2026 05:22:56 +0000 (0:00:04.825) 0:03:42.842 ******** 2026-01-30 05:22:58.544368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 05:22:58.544382 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:22:58.544410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 05:23:07.258921 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:07.259045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-30 05:23:07.259091 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:07.259104 | orchestrator | 2026-01-30 05:23:07.259116 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-30 05:23:07.259128 | orchestrator | Friday 30 January 2026 05:22:58 +0000 (0:00:01.724) 0:03:44.566 ******** 2026-01-30 05:23:07.259141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-30 05:23:07.259156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 05:23:07.259169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-30 05:23:07.259182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 05:23:07.259194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-30 05:23:07.259206 | orchestrator | skipping: [testbed-node-0] 2026-01-30 
05:23:07.259235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-30 05:23:07.259322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 05:23:07.259342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-30 05:23:07.259353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 05:23:07.259365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-30 05:23:07.259384 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:07.259396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-30 05:23:07.259407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 05:23:07.259419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-01-30 05:23:07.259435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-30 05:23:07.259449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-30 05:23:07.259462 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:07.259475 | orchestrator | 2026-01-30 05:23:07.259488 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-30 05:23:07.259501 | orchestrator | Friday 30 January 2026 05:23:00 +0000 (0:00:02.036) 0:03:46.603 ******** 2026-01-30 05:23:07.259514 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:23:07.259528 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:23:07.259540 | orchestrator 
| ok: [testbed-node-2] 2026-01-30 05:23:07.259555 | orchestrator | 2026-01-30 05:23:07.259568 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-30 05:23:07.259580 | orchestrator | Friday 30 January 2026 05:23:02 +0000 (0:00:02.244) 0:03:48.848 ******** 2026-01-30 05:23:07.259593 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:23:07.259607 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:23:07.259619 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:23:07.259638 | orchestrator | 2026-01-30 05:23:07.259659 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-30 05:23:07.259674 | orchestrator | Friday 30 January 2026 05:23:05 +0000 (0:00:02.862) 0:03:51.710 ******** 2026-01-30 05:23:07.259685 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:07.259696 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:07.259707 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:07.259718 | orchestrator | 2026-01-30 05:23:07.259761 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-30 05:23:07.259778 | orchestrator | Friday 30 January 2026 05:23:07 +0000 (0:00:01.353) 0:03:53.064 ******** 2026-01-30 05:23:07.259799 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:17.241960 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:17.242132 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:17.242152 | orchestrator | 2026-01-30 05:23:17.242169 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-30 05:23:17.242184 | orchestrator | Friday 30 January 2026 05:23:08 +0000 (0:00:01.353) 0:03:54.418 ******** 2026-01-30 05:23:17.242197 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:23:17.242210 | orchestrator | 2026-01-30 05:23:17.242223 | orchestrator | TASK 
[haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-30 05:23:17.242236 | orchestrator | Friday 30 January 2026 05:23:10 +0000 (0:00:02.042) 0:03:56.461 ******** 2026-01-30 05:23:17.242280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-30 05:23:17.242300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 
05:23:17.242330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 05:23:17.242345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-30 05:23:17.242378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 05:23:17.242400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 05:23:17.242413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-01-30 05:23:17.242433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 05:23:17.242447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 05:23:17.242460 | orchestrator | 2026-01-30 05:23:17.242473 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-30 05:23:17.242487 | orchestrator | Friday 30 January 2026 05:23:15 +0000 (0:00:04.843) 0:04:01.304 ******** 2026-01-30 05:23:17.242511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-30 05:23:19.058977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 05:23:19.059076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 05:23:19.059088 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:19.059117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-30 05:23:19.059129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 05:23:19.059138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 05:23:19.059169 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:19.059200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-01-30 05:23:19.059210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-30 05:23:19.059218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-30 05:23:19.059225 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:19.059234 | orchestrator | 2026-01-30 05:23:19.059243 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-30 05:23:19.059254 | orchestrator | Friday 30 January 2026 05:23:17 +0000 (0:00:01.955) 0:04:03.260 ******** 2026-01-30 05:23:19.059269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-30 05:23:19.059280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-30 05:23:19.059292 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:19.059297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-30 05:23:19.059308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-30 05:23:19.059314 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:19.059319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-30 05:23:19.059324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-01-30 05:23:19.059330 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:19.059335 | orchestrator | 
2026-01-30 05:23:19.059340 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-30 05:23:19.059350 | orchestrator | Friday 30 January 2026 05:23:19 +0000 (0:00:01.814) 0:04:05.074 ******** 2026-01-30 05:23:34.408832 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:23:34.408949 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:23:34.408964 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:23:34.408976 | orchestrator | 2026-01-30 05:23:34.408988 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-30 05:23:34.409000 | orchestrator | Friday 30 January 2026 05:23:21 +0000 (0:00:02.225) 0:04:07.300 ******** 2026-01-30 05:23:34.409011 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:23:34.409022 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:23:34.409033 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:23:34.409044 | orchestrator | 2026-01-30 05:23:34.409055 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-01-30 05:23:34.409066 | orchestrator | Friday 30 January 2026 05:23:24 +0000 (0:00:03.290) 0:04:10.591 ******** 2026-01-30 05:23:34.409077 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:34.409089 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:34.409099 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:34.409110 | orchestrator | 2026-01-30 05:23:34.409121 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-30 05:23:34.409132 | orchestrator | Friday 30 January 2026 05:23:25 +0000 (0:00:01.371) 0:04:11.962 ******** 2026-01-30 05:23:34.409143 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:23:34.409154 | orchestrator | 2026-01-30 05:23:34.409167 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-30 
05:23:34.409178 | orchestrator | Friday 30 January 2026 05:23:27 +0000 (0:00:01.760) 0:04:13.722 ******** 2026-01-30 05:23:34.409194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:23:34.409251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:23:34.409266 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:23:34.409298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:23:34.409312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:23:34.409329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:23:34.409349 | orchestrator | 2026-01-30 05:23:34.409361 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-30 05:23:34.409373 | orchestrator | Friday 30 January 2026 05:23:32 +0000 (0:00:05.007) 
0:04:18.730 ******** 2026-01-30 05:23:34.409385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:23:34.409405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:23:47.283818 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:47.283926 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:23:47.283943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:23:47.283972 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:47.283994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:23:47.284003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:23:47.284012 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:47.284020 | orchestrator | 2026-01-30 05:23:47.284029 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************
2026-01-30 05:23:47.284037 | orchestrator | Friday 30 January 2026 05:23:34 +0000 (0:00:01.699) 0:04:20.429 ********
2026-01-30 05:23:47.284059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:23:47.284071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:23:47.284081 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:23:47.284090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:23:47.284098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:23:47.284106 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:23:47.284114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:23:47.284129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-01-30 05:23:47.284137 | orchestrator | skipping: [testbed-node-2]
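Each service item in the log above carries a kolla-style healthcheck dict (string-valued `interval`, `retries`, `start_period`, `timeout`, and a `CMD-SHELL` test such as `healthcheck_curl http://192.168.16.10:9511`). As a rough illustration of how such a dict maps onto container health options, here is a minimal sketch; the helper name is made up and this is not kolla-ansible's actual implementation:

```python
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck dict (as printed in the log
    above) into `docker run` health flags. Illustrative sketch only;
    not how kolla-ansible actually wires healthchecks."""
    args = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    kind, *cmd = hc["test"]
    if kind == "CMD-SHELL":  # command is run through a shell inside the container
        args += ["--health-cmd", cmd[0]]
    return args

# the magnum-api healthcheck dict from the log, values copied verbatim
magnum_api_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9511"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(magnum_api_hc))
```

The `--health-*` flags are standard `docker run` options; the mapping of the string values to second-suffixed durations is an assumption for illustration.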
2026-01-30 05:23:47.284145 | orchestrator |
2026-01-30 05:23:47.284153 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-01-30 05:23:47.284161 | orchestrator | Friday 30 January 2026 05:23:36 +0000 (0:00:02.004) 0:04:22.434 ********
2026-01-30 05:23:47.284169 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:23:47.284178 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:23:47.284186 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:23:47.284193 | orchestrator |
2026-01-30 05:23:47.284201 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-01-30 05:23:47.284209 | orchestrator | Friday 30 January 2026 05:23:38 +0000 (0:00:02.358) 0:04:24.793 ********
2026-01-30 05:23:47.284216 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:23:47.284224 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:23:47.284232 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:23:47.284240 | orchestrator |
2026-01-30 05:23:47.284252 | orchestrator | TASK [include_role : manila] ***************************************************
2026-01-30 05:23:47.284260 | orchestrator | Friday 30 January 2026 05:23:41 +0000 (0:00:02.917) 0:04:27.710 ********
2026-01-30 05:23:47.284268 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:23:47.284275 | orchestrator |
2026-01-30 05:23:47.284283 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-01-30 05:23:47.284291 | orchestrator | Friday 30 January 2026 05:23:43 +0000 (0:00:02.154) 0:04:29.864 ********
2026-01-30 05:23:47.284300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:23:47.284316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:23:49.000595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:23:49.000948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 05:23:49.000991 | orchestrator | 2026-01-30 05:23:49.001004 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-30 05:23:49.001018 | orchestrator | Friday 30 January 2026 05:23:48 +0000 (0:00:04.511) 0:04:34.375 ******** 2026-01-30 05:23:49.001033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:23:49.001055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102149 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:52.102174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:23:52.102182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 
05:23:52.102189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102238 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:23:52.102245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:23:52.102254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-30 05:23:52.102279 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:23:52.102285 | orchestrator | 2026-01-30 05:23:52.102292 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-30 05:23:52.102299 | orchestrator | Friday 30 January 2026 05:23:50 +0000 (0:00:01.829) 0:04:36.205 ******** 2026-01-30 05:23:52.102307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:23:52.102316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:23:52.102324 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:23:52.102330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:23:52.102341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}})  2026-01-30 05:24:08.375819 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:08.375901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:24:08.375911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:24:08.375917 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:08.375922 | orchestrator | 2026-01-30 05:24:08.375926 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-30 05:24:08.375931 | orchestrator | Friday 30 January 2026 05:23:52 +0000 (0:00:01.912) 0:04:38.118 ******** 2026-01-30 05:24:08.375935 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:24:08.375941 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:24:08.375948 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:24:08.375955 | orchestrator | 2026-01-30 05:24:08.375959 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-30 05:24:08.375973 | orchestrator | Friday 30 January 2026 05:23:55 +0000 (0:00:03.198) 0:04:41.316 ******** 2026-01-30 05:24:08.375979 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:24:08.375986 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:24:08.375992 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:24:08.375998 | orchestrator | 2026-01-30 05:24:08.376025 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-30 05:24:08.376033 | orchestrator | Friday 30 January 2026 05:23:58 +0000 (0:00:02.857) 0:04:44.174 ******** 2026-01-30 05:24:08.376039 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:24:08.376045 | orchestrator | 2026-01-30 05:24:08.376051 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-30 05:24:08.376058 | orchestrator | Friday 30 January 2026 05:24:00 +0000 (0:00:02.482) 0:04:46.656 ******** 2026-01-30 05:24:08.376064 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:24:08.376070 | orchestrator | 2026-01-30 05:24:08.376077 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-30 05:24:08.376084 | orchestrator | Friday 30 January 2026 05:24:04 +0000 (0:00:04.245) 0:04:50.902 ******** 2026-01-30 05:24:08.376095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:24:08.376135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 05:24:08.376145 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:08.376156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:24:08.376170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 05:24:08.376177 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
05:24:08.376190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:24:11.969202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 05:24:11.969306 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:11.969324 | orchestrator | 2026-01-30 05:24:11.969336 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-30 05:24:11.969347 | orchestrator | Friday 30 January 2026 05:24:08 +0000 (0:00:03.481) 0:04:54.384 ******** 2026-01-30 05:24:11.969407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:24:11.969445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 05:24:11.969458 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:11.969494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:24:11.969514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 05:24:11.969524 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:11.969534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:24:11.969552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-30 05:24:27.626375 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:27.626472 | orchestrator | 2026-01-30 05:24:27.626483 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-30 05:24:27.626492 | orchestrator | Friday 30 January 2026 05:24:11 +0000 (0:00:03.594) 0:04:57.978 ******** 2026-01-30 05:24:27.626515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-30 05:24:27.626544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-30 05:24:27.626552 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:27.626560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-30 05:24:27.626568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-30 05:24:27.626575 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:27.626583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-30 05:24:27.626591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-30 05:24:27.626598 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:27.626606 | orchestrator | 2026-01-30 05:24:27.626613 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-30 05:24:27.626621 | orchestrator | Friday 30 January 2026 05:24:15 +0000 (0:00:03.644) 0:05:01.623 ******** 2026-01-30 05:24:27.626628 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:24:27.626648 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:24:27.626656 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:24:27.626663 | orchestrator | 2026-01-30 05:24:27.626670 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-01-30 05:24:27.626741 | orchestrator | Friday 30 January 2026 05:24:18 +0000 (0:00:02.843) 0:05:04.467 ******** 2026-01-30 05:24:27.626750 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:27.626757 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:27.626764 | orchestrator | 
skipping: [testbed-node-2] 2026-01-30 05:24:27.626771 | orchestrator | 2026-01-30 05:24:27.626778 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-01-30 05:24:27.626785 | orchestrator | Friday 30 January 2026 05:24:21 +0000 (0:00:02.706) 0:05:07.174 ******** 2026-01-30 05:24:27.626793 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:27.626800 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:27.626807 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:27.626814 | orchestrator | 2026-01-30 05:24:27.626826 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-01-30 05:24:27.626833 | orchestrator | Friday 30 January 2026 05:24:22 +0000 (0:00:01.382) 0:05:08.556 ******** 2026-01-30 05:24:27.626841 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:24:27.626848 | orchestrator | 2026-01-30 05:24:27.626855 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-01-30 05:24:27.626862 | orchestrator | Friday 30 January 2026 05:24:24 +0000 (0:00:02.296) 0:05:10.852 ******** 2026-01-30 05:24:27.626870 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-01-30 05:24:27.626879 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-30 05:24:27.626887 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-30 05:24:27.626894 | orchestrator | 2026-01-30 05:24:27.626902 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-30 05:24:27.626910 | orchestrator | Friday 30 January 2026 05:24:27 +0000 (0:00:02.570) 0:05:13.423 ******** 2026-01-30 05:24:27.626923 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-30 05:24:42.745159 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:42.745274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-30 05:24:42.745291 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:42.745300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-30 05:24:42.745309 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:42.745317 | orchestrator | 2026-01-30 05:24:42.745327 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-30 05:24:42.745335 | orchestrator | Friday 30 January 2026 05:24:29 +0000 (0:00:01.865) 0:05:15.289 ******** 2026-01-30 05:24:42.745345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-30 05:24:42.745355 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:42.745363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-30 05:24:42.745371 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:42.745379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  
2026-01-30 05:24:42.745386 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:42.745394 | orchestrator | 2026-01-30 05:24:42.745402 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-30 05:24:42.745427 | orchestrator | Friday 30 January 2026 05:24:30 +0000 (0:00:01.453) 0:05:16.743 ******** 2026-01-30 05:24:42.745435 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:42.745443 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:42.745451 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:42.745458 | orchestrator | 2026-01-30 05:24:42.745466 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-30 05:24:42.745474 | orchestrator | Friday 30 January 2026 05:24:32 +0000 (0:00:01.486) 0:05:18.230 ******** 2026-01-30 05:24:42.745481 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:42.745489 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:42.745497 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:42.745504 | orchestrator | 2026-01-30 05:24:42.745512 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-30 05:24:42.745520 | orchestrator | Friday 30 January 2026 05:24:34 +0000 (0:00:02.232) 0:05:20.462 ******** 2026-01-30 05:24:42.745527 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:24:42.745535 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:24:42.745543 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:24:42.745550 | orchestrator | 2026-01-30 05:24:42.745558 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-30 05:24:42.745566 | orchestrator | Friday 30 January 2026 05:24:36 +0000 (0:00:01.597) 0:05:22.059 ******** 2026-01-30 05:24:42.745574 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:24:42.745582 | 
orchestrator | 2026-01-30 05:24:42.745589 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-30 05:24:42.745597 | orchestrator | Friday 30 January 2026 05:24:37 +0000 (0:00:01.952) 0:05:24.011 ******** 2026-01-30 05:24:42.745627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:24:42.745640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:42.745650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-30 05:24:42.745666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-30 05:24:42.745706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.081284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.081384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.081401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 05:24:43.081435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 05:24:43.081449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.081463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-30 05:24:43.081504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.081517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.081531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-30 05:24:43.081553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-30 05:24:43.081566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:24:43.081592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.170832 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-30 05:24:43.170952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-30 05:24:43.170969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.170982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:24:43.171024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.171038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.171056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.171067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-01-30 05:24:43.171078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 05:24:43.171092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-30 05:24:43.171112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-01-30 05:24:43.301042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.301143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.301164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-01-30 05:24:43.301183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.301200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.301244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-30 05:24:43.301271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-01-30 05:24:43.301319 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-30 05:24:43.301340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-01-30 05:24:43.301350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:43.301365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:43.301375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:43.301397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-30 05:24:45.074760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:45.074869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.074897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-30 05:24:45.074949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:45.074978 | orchestrator |
2026-01-30 05:24:45.074999 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-01-30 05:24:45.075048 | orchestrator | Friday 30 January 2026 05:24:44 +0000 (0:00:06.188) 0:05:30.200 ********
2026-01-30 05:24:45.075094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:24:45.075118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.075140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:24:45.075171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-30 05:24:45.075207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.075243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-30 05:24:45.136353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-30 05:24:45.136436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-30 05:24:45.136444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.136463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.136467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:45.136483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:45.136488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:45.136494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:45.136501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-30 05:24:45.136509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-30 05:24:45.136513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-01-30 05:24:45.136521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:45.188487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:45.188571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.188594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.188605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.188615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-01-30 05:24:45.188640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-30 05:24:45.188650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-30 05:24:45.188659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:45.188750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-01-30 05:24:45.188761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:45.188795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:45.188808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:46.375437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:46.375559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-30 05:24:46.375573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:46.375583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-30 05:24:46.375591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:46.375600 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:24:46.375623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:24:46.375633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:46.375647 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:24:46.375658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-01-30 05:24:46.375733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:24:46.375745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-30 05:24:46.375753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-01-30 05:24:46.375768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-01-30 05:25:00.589007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-30 05:25:00.589163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-30 05:25:00.589189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-30 05:25:00.589202 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:25:00.589215 | orchestrator |
2026-01-30 05:25:00.589228 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-01-30 05:25:00.589246 | orchestrator | Friday 30 January 2026 05:24:46 +0000 (0:00:02.195) 0:05:32.395 ********
2026-01-30 05:25:00.589265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:00.589286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:00.589305 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:25:00.589324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:00.589343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:00.589361 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:25:00.589380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:00.589452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:00.589472 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:25:00.589489 | orchestrator | 2026-01-30 05:25:00.589508 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-01-30 05:25:00.589526 | orchestrator | Friday 30 January 2026 
05:24:48 +0000 (0:00:02.181) 0:05:34.576 ******** 2026-01-30 05:25:00.589548 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:25:00.589570 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:25:00.589590 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:25:00.589610 | orchestrator | 2026-01-30 05:25:00.589631 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-01-30 05:25:00.589652 | orchestrator | Friday 30 January 2026 05:24:50 +0000 (0:00:02.264) 0:05:36.841 ******** 2026-01-30 05:25:00.589721 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:25:00.589744 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:25:00.589765 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:25:00.589787 | orchestrator | 2026-01-30 05:25:00.589808 | orchestrator | TASK [include_role : placement] ************************************************ 2026-01-30 05:25:00.589828 | orchestrator | Friday 30 January 2026 05:24:53 +0000 (0:00:02.895) 0:05:39.736 ******** 2026-01-30 05:25:00.589849 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:25:00.589870 | orchestrator | 2026-01-30 05:25:00.589901 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-01-30 05:25:00.589922 | orchestrator | Friday 30 January 2026 05:24:55 +0000 (0:00:02.250) 0:05:41.987 ******** 2026-01-30 05:25:00.589947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-30 05:25:00.589969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-30 05:25:00.590220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-30 05:25:17.465062 | orchestrator | 2026-01-30 05:25:17.465208 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-01-30 05:25:17.465238 | orchestrator | Friday 30 January 2026 05:25:00 +0000 (0:00:04.618) 0:05:46.605 ******** 2026-01-30 05:25:17.465281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-30 05:25:17.465306 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:25:17.465325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-30 05:25:17.465342 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:25:17.465359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-30 05:25:17.465406 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:25:17.465425 | orchestrator | 2026-01-30 05:25:17.465441 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-01-30 05:25:17.465459 | orchestrator | Friday 30 January 2026 05:25:02 +0000 (0:00:01.566) 0:05:48.172 ******** 2026-01-30 05:25:17.465477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:25:17.465523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:25:17.465547 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:25:17.465565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:25:17.465585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:25:17.465605 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:25:17.465629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:25:17.465649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:25:17.465744 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:25:17.465762 | orchestrator | 2026-01-30 05:25:17.465778 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-01-30 05:25:17.465796 | orchestrator | Friday 30 January 2026 05:25:04 +0000 (0:00:01.879) 0:05:50.052 ******** 2026-01-30 05:25:17.465814 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:25:17.465831 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:25:17.465848 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:25:17.465866 | orchestrator | 2026-01-30 05:25:17.465883 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-01-30 05:25:17.465900 | orchestrator | Friday 30 January 2026 05:25:06 +0000 (0:00:02.304) 0:05:52.357 ******** 2026-01-30 05:25:17.465916 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:25:17.465932 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:25:17.465947 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:25:17.465965 | orchestrator | 2026-01-30 05:25:17.465981 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-01-30 
05:25:17.466090 | orchestrator | Friday 30 January 2026 05:25:09 +0000 (0:00:02.966) 0:05:55.323 ******** 2026-01-30 05:25:17.466109 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:25:17.466119 | orchestrator | 2026-01-30 05:25:17.466129 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-01-30 05:25:17.466139 | orchestrator | Friday 30 January 2026 05:25:11 +0000 (0:00:02.247) 0:05:57.570 ******** 2026-01-30 05:25:17.466151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:25:17.466177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:25:18.597550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:25:18.597719 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:25:18.597763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:25:18.597776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:25:18.597811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:25:18.597831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:25:18.597853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:25:18.597865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:25:18.597877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:25:18.597888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:25:18.597901 | orchestrator | 2026-01-30 05:25:18.597914 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-01-30 05:25:18.597934 | orchestrator | Friday 30 January 2026 05:25:18 +0000 (0:00:07.046) 0:06:04.617 ******** 2026-01-30 05:25:19.361290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:25:19.361439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:25:19.361458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:25:19.361471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:25:19.361483 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:25:19.361512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:25:19.361534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:25:19.361552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}})  2026-01-30 05:25:19.361563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:25:19.361573 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:25:19.361583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:25:19.361607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:25:40.407917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-30 05:25:40.408028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-30 05:25:40.408044 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:25:40.408058 | orchestrator | 2026-01-30 05:25:40.408070 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-30 05:25:40.408083 | orchestrator | Friday 30 January 2026 05:25:20 +0000 (0:00:01.892) 0:06:06.510 ******** 2026-01-30 05:25:40.408095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408145 | orchestrator | skipping: [testbed-node-0] 2026-01-30 
05:25:40.408157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408225 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:25:40.408252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:25:40.408313 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:25:40.408323 | orchestrator | 2026-01-30 05:25:40.408335 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-30 05:25:40.408346 | orchestrator | Friday 30 January 2026 05:25:23 +0000 (0:00:02.526) 0:06:09.037 ******** 2026-01-30 05:25:40.408357 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:25:40.408368 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:25:40.408379 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:25:40.408390 | orchestrator | 2026-01-30 05:25:40.408401 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-30 05:25:40.408412 | orchestrator | Friday 30 January 2026 05:25:25 +0000 (0:00:02.320) 0:06:11.357 ******** 2026-01-30 05:25:40.408423 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:25:40.408434 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:25:40.408444 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:25:40.408455 | orchestrator | 2026-01-30 05:25:40.408465 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-30 05:25:40.408476 | orchestrator | Friday 30 January 2026 05:25:28 +0000 (0:00:03.133) 0:06:14.490 ******** 2026-01-30 05:25:40.408486 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:25:40.408496 | orchestrator | 2026-01-30 05:25:40.408506 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-01-30 05:25:40.408516 | orchestrator | Friday 30 January 2026 05:25:31 +0000 (0:00:02.778) 0:06:17.269 ******** 2026-01-30 05:25:40.408526 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-30 05:25:40.408537 | orchestrator | 2026-01-30 05:25:40.408546 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-30 05:25:40.408557 | orchestrator | Friday 30 January 2026 05:25:32 +0000 (0:00:01.709) 0:06:18.978 ******** 2026-01-30 05:25:40.408569 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-30 05:25:40.408581 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-30 05:25:40.408602 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-30 05:25:40.408613 | orchestrator | 2026-01-30 05:25:40.408623 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-30 05:25:40.408636 | orchestrator | Friday 30 January 2026 05:25:38 +0000 (0:00:05.311) 0:06:24.290 ******** 2026-01-30 05:25:40.408674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:25:40.408694 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:02.428354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428449 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:02.428460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428467 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:02.428474 | orchestrator | 2026-01-30 05:26:02.428480 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-30 05:26:02.428487 | orchestrator | Friday 30 January 2026 05:25:40 +0000 (0:00:02.136) 0:06:26.426 ******** 2026-01-30 05:26:02.428493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 05:26:02.428502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 05:26:02.428511 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:02.428521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 05:26:02.428566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 05:26:02.428573 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:02.428579 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 05:26:02.428584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-30 05:26:02.428590 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:02.428596 | orchestrator | 2026-01-30 05:26:02.428601 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-30 05:26:02.428607 | orchestrator | Friday 30 January 2026 05:25:42 +0000 (0:00:02.313) 0:06:28.739 ******** 2026-01-30 05:26:02.428612 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:26:02.428619 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:02.428631 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:26:02.428701 | orchestrator | 2026-01-30 05:26:02.428707 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-30 05:26:02.428713 | orchestrator | Friday 30 January 2026 05:25:46 +0000 (0:00:04.103) 0:06:32.843 ******** 2026-01-30 05:26:02.428718 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:26:02.428723 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:02.428729 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:26:02.428734 | orchestrator | 2026-01-30 05:26:02.428739 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-30 05:26:02.428745 | orchestrator | Friday 30 January 2026 05:25:50 +0000 (0:00:03.670) 0:06:36.514 ******** 2026-01-30 05:26:02.428751 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-30 05:26:02.428758 | orchestrator | 2026-01-30 05:26:02.428776 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-30 05:26:02.428782 | orchestrator | Friday 30 January 2026 05:25:52 +0000 (0:00:01.754) 0:06:38.268 ******** 2026-01-30 05:26:02.428801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428809 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:02.428815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428820 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:02.428826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428837 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:02.428842 | orchestrator | 2026-01-30 05:26:02.428848 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-30 05:26:02.428853 | orchestrator | Friday 30 January 2026 05:25:54 +0000 (0:00:02.305) 0:06:40.574 ******** 2026-01-30 05:26:02.428859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428865 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:02.428870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428876 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:02.428881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-30 05:26:02.428887 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:02.428892 | orchestrator | 2026-01-30 05:26:02.428898 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-30 05:26:02.428904 | orchestrator | Friday 30 January 2026 05:25:56 +0000 (0:00:02.335) 0:06:42.909 ******** 2026-01-30 05:26:02.428910 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:02.428916 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:02.428922 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:02.428928 | orchestrator | 2026-01-30 05:26:02.428938 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-30 05:26:02.428945 | orchestrator | Friday 30 January 2026 05:25:59 +0000 (0:00:02.143) 0:06:45.052 ******** 2026-01-30 05:26:02.428954 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:02.428964 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:26:02.428974 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:26:02.428982 | orchestrator | 2026-01-30 05:26:02.428989 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-30 05:26:02.428996 | orchestrator | Friday 30 January 2026 05:26:02 +0000 (0:00:03.388) 0:06:48.441 ******** 2026-01-30 05:26:30.311118 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:26:30.311199 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:30.311205 | orchestrator | ok: [testbed-node-2] 2026-01-30 
05:26:30.311210 | orchestrator | 2026-01-30 05:26:30.311215 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-30 05:26:30.311220 | orchestrator | Friday 30 January 2026 05:26:06 +0000 (0:00:03.729) 0:06:52.171 ******** 2026-01-30 05:26:30.311224 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-30 05:26:30.311244 | orchestrator | 2026-01-30 05:26:30.311249 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-30 05:26:30.311254 | orchestrator | Friday 30 January 2026 05:26:08 +0000 (0:00:02.318) 0:06:54.489 ******** 2026-01-30 05:26:30.311260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 05:26:30.311268 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:30.311272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 05:26:30.311276 | 
orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:30.311280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 05:26:30.311284 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:30.311288 | orchestrator | 2026-01-30 05:26:30.311292 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-30 05:26:30.311297 | orchestrator | Friday 30 January 2026 05:26:10 +0000 (0:00:02.432) 0:06:56.921 ******** 2026-01-30 05:26:30.311300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 05:26:30.311304 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:30.311311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 05:26:30.311315 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:30.311339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-30 05:26:30.311351 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:30.311354 | orchestrator | 2026-01-30 05:26:30.311358 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-30 05:26:30.311362 | orchestrator | Friday 30 January 2026 05:26:13 +0000 (0:00:02.559) 0:06:59.481 ******** 2026-01-30 05:26:30.311366 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:30.311370 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:30.311373 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:30.311377 | orchestrator | 2026-01-30 05:26:30.311381 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-30 05:26:30.311385 | orchestrator | Friday 30 January 2026 05:26:16 +0000 (0:00:02.685) 0:07:02.167 ******** 2026-01-30 05:26:30.311389 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:26:30.311393 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:30.311396 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:26:30.311400 | orchestrator | 2026-01-30 05:26:30.311404 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-30 05:26:30.311408 | orchestrator | Friday 30 January 2026 05:26:19 +0000 (0:00:03.565) 0:07:05.732 ******** 2026-01-30 05:26:30.311412 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:26:30.311415 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:30.311419 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:26:30.311423 | orchestrator | 2026-01-30 05:26:30.311427 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-30 05:26:30.311430 | orchestrator | Friday 30 January 2026 05:26:24 +0000 (0:00:04.330) 0:07:10.063 ******** 2026-01-30 05:26:30.311434 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:26:30.311438 | orchestrator | 2026-01-30 05:26:30.311442 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-30 05:26:30.311445 | orchestrator | Friday 30 January 2026 05:26:26 +0000 (0:00:02.400) 0:07:12.463 ******** 2026-01-30 05:26:30.311450 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 05:26:30.311456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 05:26:30.311462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 05:26:30.311478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 05:26:32.411572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:26:32.411796 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 05:26:32.411820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 05:26:32.411834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 05:26:32.411846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 05:26:32.411899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:26:32.412009 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-30 05:26:32.412033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 05:26:32.412062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 05:26:32.412084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 05:26:32.412119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:26:32.412139 | orchestrator | 2026-01-30 05:26:32.412160 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-30 05:26:32.412178 | orchestrator | Friday 30 January 2026 05:26:31 +0000 (0:00:04.989) 0:07:17.453 ******** 2026-01-30 05:26:32.412222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 05:26:33.569934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-01-30 05:26:33.570103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 05:26:33.570122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 05:26:33.570135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:26:33.570172 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:33.570187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 05:26:33.570203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 05:26:33.570245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 05:26:33.570266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 05:26:33.570284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:26:33.570315 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:33.570387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-30 05:26:33.570420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-30 05:26:33.570454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-30 05:26:50.458573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-30 05:26:50.458705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-30 05:26:50.458718 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:50.458727 | orchestrator | 2026-01-30 05:26:50.458734 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-30 05:26:50.458741 | orchestrator | Friday 30 January 2026 05:26:33 +0000 (0:00:02.143) 0:07:19.597 ******** 2026-01-30 05:26:50.458771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-30 05:26:50.458781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-30 05:26:50.458790 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:50.458795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-30 05:26:50.458802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-30 05:26:50.458809 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:50.458815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-30 05:26:50.458822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-30 05:26:50.458828 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:50.458834 | orchestrator | 2026-01-30 05:26:50.458840 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-30 05:26:50.458846 | orchestrator | Friday 30 January 2026 05:26:35 +0000 (0:00:02.101) 0:07:21.699 ******** 2026-01-30 05:26:50.458851 | orchestrator | ok: [testbed-node-0] 2026-01-30 
05:26:50.458858 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:50.458877 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:26:50.458885 | orchestrator | 2026-01-30 05:26:50.458893 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-30 05:26:50.458899 | orchestrator | Friday 30 January 2026 05:26:37 +0000 (0:00:02.259) 0:07:23.959 ******** 2026-01-30 05:26:50.458905 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:26:50.458911 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:26:50.458917 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:26:50.458923 | orchestrator | 2026-01-30 05:26:50.458928 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-30 05:26:50.458935 | orchestrator | Friday 30 January 2026 05:26:41 +0000 (0:00:03.118) 0:07:27.077 ******** 2026-01-30 05:26:50.458941 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:26:50.458948 | orchestrator | 2026-01-30 05:26:50.458953 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-30 05:26:50.458959 | orchestrator | Friday 30 January 2026 05:26:43 +0000 (0:00:02.431) 0:07:29.509 ******** 2026-01-30 05:26:50.458984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:26:50.459002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:26:50.459009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:26:50.459021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:26:50.459035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:26:54.338182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:26:54.338297 | orchestrator | 2026-01-30 05:26:54.338321 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-30 
05:26:54.338338 | orchestrator | Friday 30 January 2026 05:26:50 +0000 (0:00:06.968) 0:07:36.478 ******** 2026-01-30 05:26:54.338355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:26:54.338393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:26:54.338410 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:26:54.338448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:26:54.338489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:26:54.338504 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:26:54.338526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:26:54.338543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:26:54.338567 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:26:54.338583 | orchestrator | 2026-01-30 05:26:54.338600 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-30 05:26:54.338616 | orchestrator | Friday 30 January 2026 05:26:52 +0000 (0:00:02.081) 0:07:38.559 ******** 2026-01-30 05:26:54.338661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:26:54.338683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-30 05:27:03.111761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-30 05:27:03.111842 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:03.111850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:03.111855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-30 05:27:03.111861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-30 05:27:03.111864 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:03.111868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:03.111872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-30 05:27:03.111876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-01-30 05:27:03.111891 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:03.111895 | orchestrator | 2026-01-30 05:27:03.111900 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-30 05:27:03.111905 | orchestrator | Friday 30 January 2026 05:26:54 +0000 (0:00:01.803) 0:07:40.363 ******** 2026-01-30 05:27:03.111909 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:03.111912 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:03.111916 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:03.111920 | orchestrator | 2026-01-30 05:27:03.111924 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-30 05:27:03.111940 | orchestrator | Friday 30 January 2026 05:26:55 +0000 (0:00:01.448) 0:07:41.811 ******** 2026-01-30 05:27:03.111944 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:03.111948 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:03.111952 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:03.111956 | orchestrator | 2026-01-30 05:27:03.111959 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-30 05:27:03.111963 | orchestrator | Friday 30 January 2026 05:26:58 +0000 (0:00:02.256) 0:07:44.068 ******** 2026-01-30 05:27:03.111967 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:27:03.111972 | orchestrator | 2026-01-30 05:27:03.111975 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-30 05:27:03.111979 | orchestrator | Friday 30 January 2026 05:27:00 +0000 (0:00:02.447) 0:07:46.516 ******** 2026-01-30 05:27:03.111997 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-30 05:27:03.112004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 05:27:03.112009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:03.112014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:03.112021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-30 05:27:03.112029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 05:27:03.112033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 05:27:03.112040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:05.045077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:05.045159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 05:27:05.045186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-01-30 05:27:05.045211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 05:27:05.045219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:05.045225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:05.045244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 05:27:05.045252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:27:05.045270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-30 05:27:05.045278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:05.045284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:05.045291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 05:27:05.045304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:27:07.206232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:27:07.206350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-30 05:27:07.206369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-30 05:27:07.206381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.206395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.206428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.206458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 05:27:07.206471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.206483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 05:27:07.206495 | orchestrator | 2026-01-30 05:27:07.206510 
| orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-30 05:27:07.206524 | orchestrator | Friday 30 January 2026 05:27:06 +0000 (0:00:05.794) 0:07:52.310 ******** 2026-01-30 05:27:07.206537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-30 05:27:07.206552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 05:27:07.206569 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.409312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.409433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 05:27:07.409449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:27:07.409461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-30 05:27:07.409470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.409520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-30 05:27:07.409531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.409539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 05:27:07.409547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 05:27:07.409556 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:07.409566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.409575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:07.409583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 05:27:07.409608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:27:08.695907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-30 05:27:08.695985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:08.695993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:08.696000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 05:27:08.696027 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:08.696038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-01-30 05:27:08.696075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-30 05:27:08.696085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:08.696092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:08.696099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-30 05:27:08.696107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:27:08.696122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-01-30 05:27:08.696141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:20.686195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:27:20.686290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-30 05:27:20.686301 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:20.686310 | orchestrator | 2026-01-30 05:27:20.686318 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-30 05:27:20.686326 | orchestrator | Friday 30 January 2026 05:27:08 +0000 (0:00:02.410) 0:07:54.720 ******** 2026-01-30 05:27:20.686335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-30 05:27:20.686344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-30 05:27:20.686370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:20.686378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:20.686386 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:20.686393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-30 05:27:20.686400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-30 05:27:20.686419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:20.686439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:20.686447 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
05:27:20.686454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-30 05:27:20.686461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-01-30 05:27:20.686467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:20.686474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-01-30 05:27:20.686487 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:20.686494 | orchestrator | 2026-01-30 05:27:20.686501 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-30 05:27:20.686508 | orchestrator | Friday 30 January 2026 05:27:10 +0000 (0:00:01.834) 0:07:56.555 ******** 2026-01-30 05:27:20.686514 | orchestrator | skipping: 
[testbed-node-0] 2026-01-30 05:27:20.686521 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:20.686528 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:20.686534 | orchestrator | 2026-01-30 05:27:20.686541 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-30 05:27:20.686547 | orchestrator | Friday 30 January 2026 05:27:12 +0000 (0:00:01.901) 0:07:58.456 ******** 2026-01-30 05:27:20.686554 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:20.686561 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:20.686567 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:20.686574 | orchestrator | 2026-01-30 05:27:20.686580 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-30 05:27:20.686587 | orchestrator | Friday 30 January 2026 05:27:14 +0000 (0:00:02.300) 0:08:00.757 ******** 2026-01-30 05:27:20.686593 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:27:20.686600 | orchestrator | 2026-01-30 05:27:20.686607 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-30 05:27:20.686613 | orchestrator | Friday 30 January 2026 05:27:16 +0000 (0:00:02.191) 0:08:02.948 ******** 2026-01-30 05:27:20.686648 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:27:20.686671 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:27:37.757220 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:27:37.757348 | orchestrator | 2026-01-30 05:27:37.757363 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-30 05:27:37.757372 | orchestrator | Friday 30 January 2026 05:27:20 +0000 (0:00:03.750) 0:08:06.698 ******** 2026-01-30 05:27:37.757380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:27:37.757388 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:37.757396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:27:37.757416 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:37.757439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-01-30 05:27:37.757451 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:37.757458 | orchestrator | 2026-01-30 05:27:37.757464 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-30 05:27:37.757470 | orchestrator | Friday 30 January 2026 05:27:22 +0000 (0:00:01.470) 0:08:08.169 ******** 2026-01-30 05:27:37.757478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-30 05:27:37.757486 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:37.757493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-30 05:27:37.757500 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:37.757507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-30 05:27:37.757513 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:37.757520 | orchestrator | 2026-01-30 05:27:37.757527 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-30 05:27:37.757534 | orchestrator | Friday 30 January 2026 05:27:23 +0000 (0:00:01.549) 0:08:09.719 ******** 2026-01-30 05:27:37.757541 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:37.757548 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:37.757554 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:37.757561 | orchestrator | 2026-01-30 05:27:37.757568 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-30 05:27:37.757574 | orchestrator | Friday 30 January 2026 05:27:25 +0000 (0:00:01.815) 0:08:11.534 ******** 2026-01-30 
05:27:37.757581 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:37.757588 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:37.757594 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:27:37.757601 | orchestrator | 2026-01-30 05:27:37.757609 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-30 05:27:37.757658 | orchestrator | Friday 30 January 2026 05:27:27 +0000 (0:00:02.112) 0:08:13.647 ******** 2026-01-30 05:27:37.757668 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:27:37.757675 | orchestrator | 2026-01-30 05:27:37.757682 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-30 05:27:37.757688 | orchestrator | Friday 30 January 2026 05:27:29 +0000 (0:00:02.344) 0:08:15.991 ******** 2026-01-30 05:27:37.757696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-30 
05:27:37.757710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-30 05:27:37.757730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-01-30 05:27:39.470382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-30 05:27:39.470503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-30 05:27:39.470559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-01-30 05:27:39.470574 | orchestrator | 2026-01-30 05:27:39.470588 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-30 05:27:39.470600 | orchestrator | Friday 30 January 2026 05:27:37 +0000 (0:00:07.784) 0:08:23.776 ******** 2026-01-30 05:27:39.470711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-30 05:27:39.470737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-30 05:27:39.470758 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:27:39.470790 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-30 05:27:39.470828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-30 05:27:39.470842 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:27:39.470864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-01-30 05:28:00.663955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-01-30 05:28:00.664071 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:28:00.664085 | orchestrator | 2026-01-30 05:28:00.664094 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-30 05:28:00.664104 | orchestrator | Friday 30 January 2026 05:27:39 +0000 (0:00:01.714) 0:08:25.490 ******** 2026-01-30 05:28:00.664132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-30 05:28:00.664145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-30 05:28:00.664155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:28:00.664165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:28:00.664173 | orchestrator | skipping: 
[testbed-node-0] 2026-01-30 05:28:00.664182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-30 05:28:00.664190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-30 05:28:00.664198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:28:00.664207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:28:00.664215 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:28:00.664223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-30 05:28:00.664231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-01-30 05:28:00.664253 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:28:00.664299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-01-30 05:28:00.664308 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:28:00.664316 | orchestrator | 2026-01-30 05:28:00.664324 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-30 05:28:00.664332 | orchestrator | Friday 30 January 2026 05:27:41 +0000 (0:00:02.013) 0:08:27.503 ******** 2026-01-30 05:28:00.664347 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:28:00.664356 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:28:00.664364 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:28:00.664372 | orchestrator | 2026-01-30 05:28:00.664380 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-30 05:28:00.664388 | orchestrator | Friday 30 January 2026 05:27:43 +0000 (0:00:02.294) 0:08:29.798 ******** 2026-01-30 05:28:00.664395 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:28:00.664404 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:28:00.664412 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:28:00.664419 | orchestrator | 2026-01-30 05:28:00.664427 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-30 05:28:00.664436 | orchestrator | Friday 30 January 2026 05:27:46 +0000 (0:00:02.924) 0:08:32.722 ******** 2026-01-30 05:28:00.664443 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:28:00.664451 | orchestrator 
| skipping: [testbed-node-1] 2026-01-30 05:28:00.664459 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:28:00.664467 | orchestrator | 2026-01-30 05:28:00.664475 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-30 05:28:00.664483 | orchestrator | Friday 30 January 2026 05:27:48 +0000 (0:00:01.389) 0:08:34.111 ******** 2026-01-30 05:28:00.664491 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:28:00.664501 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:28:00.664510 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:28:00.664520 | orchestrator | 2026-01-30 05:28:00.664529 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-30 05:28:00.664538 | orchestrator | Friday 30 January 2026 05:27:49 +0000 (0:00:01.297) 0:08:35.409 ******** 2026-01-30 05:28:00.664551 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:28:00.664562 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:28:00.664571 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:28:00.664580 | orchestrator | 2026-01-30 05:28:00.664589 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-30 05:28:00.664599 | orchestrator | Friday 30 January 2026 05:27:51 +0000 (0:00:01.704) 0:08:37.114 ******** 2026-01-30 05:28:00.664608 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:28:00.664646 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:28:00.664655 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:28:00.664665 | orchestrator | 2026-01-30 05:28:00.664674 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-30 05:28:00.664683 | orchestrator | Friday 30 January 2026 05:27:52 +0000 (0:00:01.378) 0:08:38.492 ******** 2026-01-30 05:28:00.664693 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:28:00.664702 | orchestrator | 
skipping: [testbed-node-1] 2026-01-30 05:28:00.664711 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:28:00.664721 | orchestrator | 2026-01-30 05:28:00.664731 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-01-30 05:28:00.664740 | orchestrator | Friday 30 January 2026 05:27:53 +0000 (0:00:01.474) 0:08:39.967 ******** 2026-01-30 05:28:00.664749 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:28:00.664760 | orchestrator | 2026-01-30 05:28:00.664769 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-01-30 05:28:00.664778 | orchestrator | Friday 30 January 2026 05:27:56 +0000 (0:00:02.586) 0:08:42.554 ******** 2026-01-30 05:28:00.664789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-30 05:28:00.664813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-30 05:28:04.794731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-30 05:28:04.794870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:28:04.794922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:28:04.794947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-30 05:28:04.794969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:28:04.795023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:28:04.795073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-30 05:28:04.795096 | orchestrator | 2026-01-30 05:28:04.795118 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-01-30 05:28:04.795137 | orchestrator | Friday 30 January 2026 05:28:00 +0000 (0:00:04.128) 0:08:46.683 ******** 2026-01-30 05:28:04.795159 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:28:04.795182 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:28:04.795202 | orchestrator | } 2026-01-30 05:28:04.795223 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:28:04.795243 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:28:04.795263 | orchestrator | } 2026-01-30 05:28:04.795284 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:28:04.795306 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:28:04.795327 | orchestrator | } 2026-01-30 05:28:04.795347 | orchestrator | 2026-01-30 05:28:04.795367 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:28:04.795386 | orchestrator | Friday 30 January 2026 05:28:02 +0000 (0:00:01.370) 0:08:48.053 ******** 2026-01-30 05:28:04.795408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-30 05:28:04.795439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:28:04.795460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:28:04.795495 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:28:04.795516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-30 05:28:04.795537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:28:04.795571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:30:05.414894 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:05.414988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-30 05:30:05.415014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-30 05:30:05.415022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-30 05:30:05.415031 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:05.415055 | orchestrator | 2026-01-30 05:30:05.415064 | orchestrator | 
RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-30 05:30:05.415070 | orchestrator | Friday 30 January 2026 05:28:04 +0000 (0:00:02.756) 0:08:50.810 ******** 2026-01-30 05:30:05.415074 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:05.415079 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:05.415083 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:05.415087 | orchestrator | 2026-01-30 05:30:05.415091 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-30 05:30:05.415095 | orchestrator | Friday 30 January 2026 05:28:06 +0000 (0:00:01.726) 0:08:52.537 ******** 2026-01-30 05:30:05.415098 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:05.415103 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:05.415106 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:05.415110 | orchestrator | 2026-01-30 05:30:05.415114 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-30 05:30:05.415117 | orchestrator | Friday 30 January 2026 05:28:07 +0000 (0:00:01.373) 0:08:53.911 ******** 2026-01-30 05:30:05.415121 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:30:05.415125 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:30:05.415129 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:30:05.415133 | orchestrator | 2026-01-30 05:30:05.415139 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-30 05:30:05.415144 | orchestrator | Friday 30 January 2026 05:28:15 +0000 (0:00:07.137) 0:09:01.048 ******** 2026-01-30 05:30:05.415150 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:30:05.415157 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:30:05.415163 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:30:05.415171 | orchestrator | 2026-01-30 05:30:05.415175 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup 
proxysql container] **************** 2026-01-30 05:30:05.415178 | orchestrator | Friday 30 January 2026 05:28:22 +0000 (0:00:07.482) 0:09:08.531 ******** 2026-01-30 05:30:05.415182 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:30:05.415186 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:30:05.415190 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:30:05.415193 | orchestrator | 2026-01-30 05:30:05.415197 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-30 05:30:05.415201 | orchestrator | Friday 30 January 2026 05:28:29 +0000 (0:00:07.076) 0:09:15.607 ******** 2026-01-30 05:30:05.415204 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:30:05.415208 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:30:05.415212 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:30:05.415216 | orchestrator | 2026-01-30 05:30:05.415219 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-30 05:30:05.415223 | orchestrator | Friday 30 January 2026 05:28:37 +0000 (0:00:07.766) 0:09:23.373 ******** 2026-01-30 05:30:05.415227 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:05.415230 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:05.415235 | orchestrator | 2026-01-30 05:30:05.415238 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-30 05:30:05.415242 | orchestrator | Friday 30 January 2026 05:28:41 +0000 (0:00:03.710) 0:09:27.084 ******** 2026-01-30 05:30:05.415246 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:30:05.415250 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:30:05.415253 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:30:05.415257 | orchestrator | 2026-01-30 05:30:05.415271 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-30 05:30:05.415276 | orchestrator | Friday 30 
January 2026 05:28:53 +0000 (0:00:12.802) 0:09:39.886 ******** 2026-01-30 05:30:05.415279 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:05.415283 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:05.415287 | orchestrator | 2026-01-30 05:30:05.415291 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-30 05:30:05.415294 | orchestrator | Friday 30 January 2026 05:28:58 +0000 (0:00:04.581) 0:09:44.468 ******** 2026-01-30 05:30:05.415303 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:30:05.415306 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:30:05.415310 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:30:05.415314 | orchestrator | 2026-01-30 05:30:05.415318 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-30 05:30:05.415321 | orchestrator | Friday 30 January 2026 05:29:05 +0000 (0:00:07.272) 0:09:51.741 ******** 2026-01-30 05:30:05.415325 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:05.415329 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:05.415333 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:30:05.415336 | orchestrator | 2026-01-30 05:30:05.415340 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-30 05:30:05.415344 | orchestrator | Friday 30 January 2026 05:29:12 +0000 (0:00:06.932) 0:09:58.673 ******** 2026-01-30 05:30:05.415347 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:05.415351 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:05.415355 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:30:05.415359 | orchestrator | 2026-01-30 05:30:05.415362 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-30 05:30:05.415366 | orchestrator | Friday 30 January 2026 05:29:19 +0000 (0:00:06.872) 0:10:05.546 ******** 2026-01-30 05:30:05.415370 
| orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:05.415373 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:05.415377 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:30:05.415381 | orchestrator | 2026-01-30 05:30:05.415389 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-30 05:30:05.415392 | orchestrator | Friday 30 January 2026 05:29:26 +0000 (0:00:06.905) 0:10:12.451 ******** 2026-01-30 05:30:05.415396 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:05.415400 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:05.415404 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:30:05.415407 | orchestrator | 2026-01-30 05:30:05.415411 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-01-30 05:30:05.415415 | orchestrator | Friday 30 January 2026 05:29:33 +0000 (0:00:07.266) 0:10:19.718 ******** 2026-01-30 05:30:05.415419 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:05.415422 | orchestrator | 2026-01-30 05:30:05.415426 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-30 05:30:05.415430 | orchestrator | Friday 30 January 2026 05:29:37 +0000 (0:00:03.618) 0:10:23.336 ******** 2026-01-30 05:30:05.415434 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:05.415437 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:05.415441 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:30:05.415445 | orchestrator | 2026-01-30 05:30:05.415449 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-01-30 05:30:05.415453 | orchestrator | Friday 30 January 2026 05:29:49 +0000 (0:00:12.407) 0:10:35.744 ******** 2026-01-30 05:30:05.415458 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:05.415462 | orchestrator | 2026-01-30 05:30:05.415466 | orchestrator | RUNNING HANDLER 
[loadbalancer : Start master keepalived container] ************* 2026-01-30 05:30:05.415471 | orchestrator | Friday 30 January 2026 05:29:54 +0000 (0:00:04.638) 0:10:40.383 ******** 2026-01-30 05:30:05.415475 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:05.415480 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:05.415484 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:30:05.415489 | orchestrator | 2026-01-30 05:30:05.415493 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-30 05:30:05.415498 | orchestrator | Friday 30 January 2026 05:30:01 +0000 (0:00:06.995) 0:10:47.379 ******** 2026-01-30 05:30:05.415502 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:05.415506 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:05.415511 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:05.415515 | orchestrator | 2026-01-30 05:30:05.415519 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-30 05:30:05.415527 | orchestrator | Friday 30 January 2026 05:30:03 +0000 (0:00:01.874) 0:10:49.253 ******** 2026-01-30 05:30:05.415531 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:05.415536 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:05.415540 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:05.415544 | orchestrator | 2026-01-30 05:30:05.415548 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:30:05.415554 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-30 05:30:05.415560 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-30 05:30:05.415564 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-01-30 05:30:05.415568 | orchestrator | 2026-01-30 
05:30:05.415573 | orchestrator | 2026-01-30 05:30:05.415577 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:30:05.415582 | orchestrator | Friday 30 January 2026 05:30:05 +0000 (0:00:02.176) 0:10:51.429 ******** 2026-01-30 05:30:05.415586 | orchestrator | =============================================================================== 2026-01-30 05:30:05.415591 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.80s 2026-01-30 05:30:05.415595 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.41s 2026-01-30 05:30:05.415599 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.78s 2026-01-30 05:30:05.415646 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.77s 2026-01-30 05:30:05.945473 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.48s 2026-01-30 05:30:05.945556 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.27s 2026-01-30 05:30:05.945564 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.27s 2026-01-30 05:30:05.945571 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.14s 2026-01-30 05:30:05.945577 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.08s 2026-01-30 05:30:05.945584 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.05s 2026-01-30 05:30:05.945590 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.00s 2026-01-30 05:30:05.945597 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.97s 2026-01-30 05:30:05.945650 | orchestrator | loadbalancer : Stop master haproxy container 
---------------------------- 6.93s 2026-01-30 05:30:05.945658 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.91s 2026-01-30 05:30:05.945664 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.87s 2026-01-30 05:30:05.945671 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.19s 2026-01-30 05:30:05.945678 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.79s 2026-01-30 05:30:05.945684 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.36s 2026-01-30 05:30:05.945691 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.31s 2026-01-30 05:30:05.945715 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.15s 2026-01-30 05:30:06.145023 | orchestrator | + osism apply -a upgrade opensearch 2026-01-30 05:30:07.984663 | orchestrator | 2026-01-30 05:30:07 | INFO  | Task d4d61d4a-a831-48c9-8d00-3777bd3bfd3a (opensearch) was prepared for execution. 2026-01-30 05:30:07.984744 | orchestrator | 2026-01-30 05:30:07 | INFO  | It takes a moment until task d4d61d4a-a831-48c9-8d00-3777bd3bfd3a (opensearch) has been started and output is visible here. 
2026-01-30 05:30:26.007751 | orchestrator | 2026-01-30 05:30:26.007931 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:30:26.007959 | orchestrator | 2026-01-30 05:30:26.007979 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:30:26.007997 | orchestrator | Friday 30 January 2026 05:30:14 +0000 (0:00:01.814) 0:00:01.814 ******** 2026-01-30 05:30:26.008015 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:26.008034 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:26.008054 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:26.008072 | orchestrator | 2026-01-30 05:30:26.008092 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 05:30:26.008110 | orchestrator | Friday 30 January 2026 05:30:15 +0000 (0:00:01.744) 0:00:03.559 ******** 2026-01-30 05:30:26.008129 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-01-30 05:30:26.008146 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-01-30 05:30:26.008164 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-01-30 05:30:26.008183 | orchestrator | 2026-01-30 05:30:26.008202 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-01-30 05:30:26.008219 | orchestrator | 2026-01-30 05:30:26.008237 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 05:30:26.008256 | orchestrator | Friday 30 January 2026 05:30:17 +0000 (0:00:01.670) 0:00:05.229 ******** 2026-01-30 05:30:26.008274 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:30:26.008292 | orchestrator | 2026-01-30 05:30:26.008311 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-01-30 05:30:26.008330 | orchestrator | Friday 30 January 2026 05:30:19 +0000 (0:00:02.348) 0:00:07.577 ******** 2026-01-30 05:30:26.008349 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 05:30:26.008368 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 05:30:26.008387 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-01-30 05:30:26.008405 | orchestrator | 2026-01-30 05:30:26.008425 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-01-30 05:30:26.008445 | orchestrator | Friday 30 January 2026 05:30:22 +0000 (0:00:02.252) 0:00:09.830 ******** 2026-01-30 05:30:26.008470 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:26.008494 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:26.008660 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:26.008697 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:26.008721 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:26.008750 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:26.008783 | orchestrator | 2026-01-30 05:30:26.008802 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 05:30:26.008820 | orchestrator | Friday 30 January 2026 05:30:24 +0000 (0:00:02.284) 0:00:12.115 ******** 2026-01-30 05:30:26.008839 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:30:26.008857 | orchestrator | 2026-01-30 05:30:26.008891 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-01-30 05:30:31.241256 | orchestrator | Friday 30 January 2026 05:30:25 +0000 
(0:00:01.579) 0:00:13.695 ******** 2026-01-30 05:30:31.241372 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:31.241393 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:31.241406 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:31.241458 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:31.241494 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:31.241509 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:31.241521 | orchestrator | 2026-01-30 05:30:31.241533 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-01-30 05:30:31.241545 | orchestrator | Friday 30 January 2026 05:30:29 +0000 (0:00:03.424) 0:00:17.120 ******** 2026-01-30 05:30:31.241658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:30:31.241700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:30:33.007451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:30:33.007524 | orchestrator | skipping: 
[testbed-node-0] 2026-01-30 05:30:33.007534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:30:33.007557 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:33.007565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:30:33.007644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:30:33.007654 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:33.007660 | orchestrator | 2026-01-30 05:30:33.007667 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-01-30 05:30:33.007675 | orchestrator | Friday 30 January 2026 05:30:31 +0000 (0:00:01.807) 0:00:18.927 ******** 2026-01-30 05:30:33.007682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:30:33.007690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}}}})  2026-01-30 05:30:33.007703 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:30:33.007714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:30:33.007725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:30:36.753482 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:30:36.753638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:30:36.753656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:30:36.753682 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:30:36.753689 | orchestrator | 2026-01-30 05:30:36.753696 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-30 05:30:36.753704 | orchestrator | Friday 30 January 2026 05:30:32 +0000 (0:00:01.763) 0:00:20.690 ******** 2026-01-30 05:30:36.753723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:36.753743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:36.753750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:36.753765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:36.753776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:36.753790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:50.108641 | orchestrator | 2026-01-30 05:30:50.108724 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-30 05:30:50.108731 | orchestrator | Friday 30 January 2026 05:30:36 +0000 (0:00:03.750) 0:00:24.441 ******** 2026-01-30 05:30:50.108736 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:50.108741 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:50.108757 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:50.108761 | orchestrator | 2026-01-30 05:30:50.108765 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-01-30 05:30:50.108769 | orchestrator | Friday 30 January 2026 05:30:40 +0000 (0:00:03.497) 0:00:27.938 ******** 2026-01-30 05:30:50.108773 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:30:50.108777 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:30:50.108781 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:30:50.108784 | orchestrator | 2026-01-30 05:30:50.108788 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-01-30 05:30:50.108792 | orchestrator | Friday 30 January 2026 05:30:43 +0000 (0:00:02.905) 0:00:30.844 ******** 2026-01-30 05:30:50.108798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:50.108814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:50.108818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-01-30 05:30:50.108834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:50.108845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:50.108853 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-01-30 05:30:50.108860 | orchestrator | 2026-01-30 05:30:50.108867 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-01-30 05:30:50.108874 | orchestrator | Friday 30 January 2026 05:30:46 +0000 (0:00:03.686) 0:00:34.530 ******** 2026-01-30 05:30:50.108880 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:30:50.108886 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:30:50.108892 | orchestrator | } 2026-01-30 05:30:50.108898 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:30:50.108914 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:30:50.108920 | orchestrator | } 2026-01-30 05:30:50.108926 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:30:50.108932 | orchestrator 
|  "msg": "Notifying handlers" 2026-01-30 05:30:50.108937 | orchestrator | } 2026-01-30 05:30:50.108944 | orchestrator | 2026-01-30 05:30:50.108950 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:30:50.108955 | orchestrator | Friday 30 January 2026 05:30:48 +0000 (0:00:01.319) 0:00:35.849 ******** 2026-01-30 05:30:50.108974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:34:04.255397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:34:04.255483 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:34:04.255502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:34:04.255508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:34:04.255525 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:34:04.255542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-01-30 05:34:04.255547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-01-30 05:34:04.255552 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:34:04.255556 | orchestrator | 2026-01-30 05:34:04.255560 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 05:34:04.255565 | orchestrator | Friday 30 January 2026 05:30:50 +0000 (0:00:01.945) 0:00:37.795 ******** 2026-01-30 05:34:04.255569 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:34:04.255573 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:34:04.255577 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:34:04.255581 | orchestrator | 2026-01-30 05:34:04.255585 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-30 05:34:04.255588 | orchestrator | Friday 30 January 2026 05:30:51 +0000 (0:00:01.511) 0:00:39.306 ******** 2026-01-30 05:34:04.255592 | orchestrator | 
2026-01-30 05:34:04.255596 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-30 05:34:04.255600 | orchestrator | Friday 30 January 2026 05:30:52 +0000 (0:00:00.429) 0:00:39.736 ******** 2026-01-30 05:34:04.255604 | orchestrator | 2026-01-30 05:34:04.255610 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-30 05:34:04.255614 | orchestrator | Friday 30 January 2026 05:30:52 +0000 (0:00:00.437) 0:00:40.173 ******** 2026-01-30 05:34:04.255618 | orchestrator | 2026-01-30 05:34:04.255622 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-01-30 05:34:04.255626 | orchestrator | Friday 30 January 2026 05:30:53 +0000 (0:00:00.776) 0:00:40.949 ******** 2026-01-30 05:34:04.255637 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:34:04.255641 | orchestrator | 2026-01-30 05:34:04.255645 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-30 05:34:04.255649 | orchestrator | Friday 30 January 2026 05:30:56 +0000 (0:00:03.708) 0:00:44.658 ******** 2026-01-30 05:34:04.255653 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:34:04.255657 | orchestrator | 2026-01-30 05:34:04.255661 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-30 05:34:04.255664 | orchestrator | Friday 30 January 2026 05:31:07 +0000 (0:00:10.241) 0:00:54.900 ******** 2026-01-30 05:34:04.255668 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:34:04.255672 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:34:04.255676 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:34:04.255680 | orchestrator | 2026-01-30 05:34:04.255684 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-30 05:34:04.255687 | orchestrator | Friday 30 January 2026 05:32:16 +0000 (0:01:09.137) 
0:02:04.037 ******** 2026-01-30 05:34:04.255691 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:34:04.255695 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:34:04.255699 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:34:04.255703 | orchestrator | 2026-01-30 05:34:04.255706 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-30 05:34:04.255710 | orchestrator | Friday 30 January 2026 05:33:54 +0000 (0:01:37.892) 0:03:41.930 ******** 2026-01-30 05:34:04.255715 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:34:04.255718 | orchestrator | 2026-01-30 05:34:04.255722 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-30 05:34:04.255726 | orchestrator | Friday 30 January 2026 05:33:55 +0000 (0:00:01.649) 0:03:43.579 ******** 2026-01-30 05:34:04.255730 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:34:04.255734 | orchestrator | 2026-01-30 05:34:04.255737 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-30 05:34:04.255741 | orchestrator | Friday 30 January 2026 05:33:59 +0000 (0:00:03.486) 0:03:47.066 ******** 2026-01-30 05:34:04.255745 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:34:04.255749 | orchestrator | 2026-01-30 05:34:04.255752 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-30 05:34:04.255756 | orchestrator | Friday 30 January 2026 05:34:03 +0000 (0:00:03.709) 0:03:50.776 ******** 2026-01-30 05:34:04.255760 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:34:04.255764 | orchestrator | 2026-01-30 05:34:04.255768 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-30 05:34:04.255775 | orchestrator | Friday 30 January 2026 05:34:04 +0000 (0:00:01.164) 
0:03:51.940 ******** 2026-01-30 05:34:06.415421 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:34:06.415542 | orchestrator | 2026-01-30 05:34:06.415560 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:34:06.415573 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:34:06.415585 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 05:34:06.415595 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 05:34:06.415604 | orchestrator | 2026-01-30 05:34:06.415614 | orchestrator | 2026-01-30 05:34:06.415624 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:34:06.415634 | orchestrator | Friday 30 January 2026 05:34:06 +0000 (0:00:01.862) 0:03:53.803 ******** 2026-01-30 05:34:06.415643 | orchestrator | =============================================================================== 2026-01-30 05:34:06.415653 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 97.89s 2026-01-30 05:34:06.415689 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.14s 2026-01-30 05:34:06.415699 | orchestrator | opensearch : Perform a flush ------------------------------------------- 10.24s 2026-01-30 05:34:06.415708 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.75s 2026-01-30 05:34:06.415718 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.71s 2026-01-30 05:34:06.415727 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.71s 2026-01-30 05:34:06.415737 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.69s 2026-01-30 
05:34:06.415747 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.50s 2026-01-30 05:34:06.415757 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.49s 2026-01-30 05:34:06.415766 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.42s 2026-01-30 05:34:06.415776 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.91s 2026-01-30 05:34:06.415785 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.35s 2026-01-30 05:34:06.415795 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.29s 2026-01-30 05:34:06.415818 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.25s 2026-01-30 05:34:06.415828 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.95s 2026-01-30 05:34:06.415838 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.86s 2026-01-30 05:34:06.415847 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.81s 2026-01-30 05:34:06.415858 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.76s 2026-01-30 05:34:06.415867 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.75s 2026-01-30 05:34:06.415877 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.67s 2026-01-30 05:34:06.692710 | orchestrator | + osism apply -a upgrade memcached 2026-01-30 05:34:08.705753 | orchestrator | 2026-01-30 05:34:08 | INFO  | Task 4c3b3f12-2097-4bd1-a914-32d7d55db4fc (memcached) was prepared for execution. 
2026-01-30 05:34:08.705855 | orchestrator | 2026-01-30 05:34:08 | INFO  | It takes a moment until task 4c3b3f12-2097-4bd1-a914-32d7d55db4fc (memcached) has been started and output is visible here. 2026-01-30 05:34:40.792000 | orchestrator | 2026-01-30 05:34:40.792120 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:34:40.792138 | orchestrator | 2026-01-30 05:34:40.792151 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:34:40.792162 | orchestrator | Friday 30 January 2026 05:34:14 +0000 (0:00:01.451) 0:00:01.451 ******** 2026-01-30 05:34:40.792173 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:34:40.792220 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:34:40.792231 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:34:40.792242 | orchestrator | 2026-01-30 05:34:40.792254 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 05:34:40.792265 | orchestrator | Friday 30 January 2026 05:34:16 +0000 (0:00:02.031) 0:00:03.483 ******** 2026-01-30 05:34:40.792276 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-01-30 05:34:40.792288 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-01-30 05:34:40.792299 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-01-30 05:34:40.792310 | orchestrator | 2026-01-30 05:34:40.792321 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-01-30 05:34:40.792333 | orchestrator | 2026-01-30 05:34:40.792344 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-01-30 05:34:40.792355 | orchestrator | Friday 30 January 2026 05:34:17 +0000 (0:00:01.510) 0:00:04.994 ******** 2026-01-30 05:34:40.792367 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-30 05:34:40.792401 | orchestrator | 2026-01-30 05:34:40.792413 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-01-30 05:34:40.792424 | orchestrator | Friday 30 January 2026 05:34:20 +0000 (0:00:02.288) 0:00:07.283 ******** 2026-01-30 05:34:40.792435 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-01-30 05:34:40.792446 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-01-30 05:34:40.792457 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-01-30 05:34:40.792467 | orchestrator | 2026-01-30 05:34:40.792478 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-01-30 05:34:40.792489 | orchestrator | Friday 30 January 2026 05:34:22 +0000 (0:00:01.878) 0:00:09.161 ******** 2026-01-30 05:34:40.792500 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-01-30 05:34:40.792538 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-01-30 05:34:40.792567 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-01-30 05:34:40.792580 | orchestrator | 2026-01-30 05:34:40.792592 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-01-30 05:34:40.792618 | orchestrator | Friday 30 January 2026 05:34:24 +0000 (0:00:02.594) 0:00:11.756 ******** 2026-01-30 05:34:40.792635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-30 05:34:40.792666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-30 05:34:40.792698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-30 05:34:40.792711 | orchestrator | 2026-01-30 05:34:40.792722 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 
2026-01-30 05:34:40.792733 | orchestrator | Friday 30 January 2026 05:34:26 +0000 (0:00:02.167) 0:00:13.924 ******** 2026-01-30 05:34:40.792744 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:34:40.792755 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:34:40.792776 | orchestrator | } 2026-01-30 05:34:40.792787 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:34:40.792798 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:34:40.792809 | orchestrator | } 2026-01-30 05:34:40.792820 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:34:40.792830 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:34:40.792841 | orchestrator | } 2026-01-30 05:34:40.792852 | orchestrator | 2026-01-30 05:34:40.792863 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:34:40.792873 | orchestrator | Friday 30 January 2026 05:34:28 +0000 (0:00:01.295) 0:00:15.219 ******** 2026-01-30 05:34:40.792885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-30 05:34:40.792897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-30 05:34:40.792909 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:34:40.792920 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:34:40.792931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-30 05:34:40.792942 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:34:40.792953 | orchestrator | 2026-01-30 05:34:40.792964 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-01-30 05:34:40.792984 | orchestrator | Friday 30 January 2026 05:34:29 +0000 (0:00:01.860) 0:00:17.079 ******** 2026-01-30 05:34:40.793003 | 
orchestrator | changed: [testbed-node-2] 2026-01-30 05:34:40.793022 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:34:40.793047 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:34:40.793071 | orchestrator | 2026-01-30 05:34:40.793089 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:34:40.793108 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:34:40.793138 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:34:40.793157 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:34:40.793175 | orchestrator | 2026-01-30 05:34:40.793228 | orchestrator | 2026-01-30 05:34:40.793246 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:34:40.793277 | orchestrator | Friday 30 January 2026 05:34:40 +0000 (0:00:10.854) 0:00:27.933 ******** 2026-01-30 05:34:41.083089 | orchestrator | =============================================================================== 2026-01-30 05:34:41.083293 | orchestrator | memcached : Restart memcached container -------------------------------- 10.85s 2026-01-30 05:34:41.083325 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.59s 2026-01-30 05:34:41.083344 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.29s 2026-01-30 05:34:41.083356 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.17s 2026-01-30 05:34:41.083367 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.03s 2026-01-30 05:34:41.083378 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.88s 2026-01-30 05:34:41.083389 | orchestrator | 
service-check-containers : Include tasks -------------------------------- 1.86s 2026-01-30 05:34:41.083400 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.51s 2026-01-30 05:34:41.083415 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.30s 2026-01-30 05:34:41.359579 | orchestrator | + osism apply -a upgrade redis 2026-01-30 05:34:43.374829 | orchestrator | 2026-01-30 05:34:43 | INFO  | Task a1054904-e713-4952-bb44-488c177ccb80 (redis) was prepared for execution. 2026-01-30 05:34:43.374931 | orchestrator | 2026-01-30 05:34:43 | INFO  | It takes a moment until task a1054904-e713-4952-bb44-488c177ccb80 (redis) has been started and output is visible here. 2026-01-30 05:34:54.660132 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-30 05:34:54.660315 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-30 05:34:54.660348 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-30 05:34:54.660359 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-30 05:34:54.660381 | orchestrator | 2026-01-30 05:34:54.660393 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:34:54.660403 | orchestrator | 2026-01-30 05:34:54.660414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:34:54.660425 | orchestrator | Friday 30 January 2026 05:34:48 +0000 (0:00:01.063) 0:00:01.063 ******** 2026-01-30 05:34:54.660436 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:34:54.660448 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:34:54.660460 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:34:54.660471 | orchestrator | 2026-01-30 05:34:54.660482 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2026-01-30 05:34:54.660493 | orchestrator | Friday 30 January 2026 05:34:49 +0000 (0:00:00.896) 0:00:01.959 ******** 2026-01-30 05:34:54.660503 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-01-30 05:34:54.660514 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-01-30 05:34:54.660531 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-01-30 05:34:54.660550 | orchestrator | 2026-01-30 05:34:54.660567 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-01-30 05:34:54.660586 | orchestrator | 2026-01-30 05:34:54.660604 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-01-30 05:34:54.660651 | orchestrator | Friday 30 January 2026 05:34:50 +0000 (0:00:00.853) 0:00:02.812 ******** 2026-01-30 05:34:54.660674 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:34:54.660697 | orchestrator | 2026-01-30 05:34:54.660718 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-01-30 05:34:54.660736 | orchestrator | Friday 30 January 2026 05:34:51 +0000 (0:00:00.949) 0:00:03.762 ******** 2026-01-30 05:34:54.660771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.660794 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.660808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.660823 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.660873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 
'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.660894 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.660924 | orchestrator | 2026-01-30 05:34:54.660941 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-01-30 05:34:54.660957 | orchestrator | Friday 30 January 2026 05:34:52 +0000 (0:00:01.341) 0:00:05.103 ******** 2026-01-30 05:34:54.660983 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.661002 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.661020 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.661038 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:54.661069 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.601836 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.601931 | orchestrator | 2026-01-30 05:34:59.601941 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-01-30 05:34:59.601948 | orchestrator | Friday 30 January 2026 05:34:54 +0000 (0:00:02.092) 
0:00:07.196 ******** 2026-01-30 05:34:59.601965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.601973 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.601983 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.601992 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602083 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602095 | orchestrator | 2026-01-30 05:34:59.602106 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-01-30 05:34:59.602112 | orchestrator | Friday 30 January 2026 05:34:57 +0000 (0:00:02.824) 0:00:10.021 ******** 2026-01-30 05:34:59.602118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:34:59.602163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-30 05:35:22.758301 | orchestrator | 2026-01-30 05:35:22.758429 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-01-30 05:35:22.758442 | orchestrator | Friday 30 January 2026 05:34:59 +0000 (0:00:02.119) 0:00:12.140 ******** 2026-01-30 05:35:22.758452 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:35:22.758460 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:35:22.758468 | orchestrator | } 2026-01-30 05:35:22.758475 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:35:22.758483 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:35:22.758490 | orchestrator | } 2026-01-30 05:35:22.758498 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:35:22.758510 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:35:22.758522 | orchestrator | } 2026-01-30 05:35:22.758534 | orchestrator | 2026-01-30 05:35:22.758547 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:35:22.758558 | orchestrator | Friday 30 January 2026 05:35:00 +0000 (0:00:00.541) 0:00:12.681 ******** 2026-01-30 05:35:22.758573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-30 05:35:22.758588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-30 05:35:22.758650 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-01-30 05:35:22.758665 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-01-30 05:35:22.758690 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:35:22.758704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-30 05:35:22.758745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-30 05:35:22.758758 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:35:22.758786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-01-30 05:35:22.758795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-01-30 05:35:22.758803 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:35:22.758810 | orchestrator | 2026-01-30 05:35:22.758817 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-30 05:35:22.758829 | orchestrator | Friday 30 January 2026 05:35:01 +0000 (0:00:01.014) 0:00:13.696 ******** 2026-01-30 05:35:22.758837 | orchestrator | 2026-01-30 05:35:22.758844 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-30 05:35:22.758851 | orchestrator | Friday 30 January 2026 05:35:01 +0000 (0:00:00.084) 0:00:13.780 ******** 2026-01-30 05:35:22.758858 | orchestrator | 2026-01-30 05:35:22.758866 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-30 05:35:22.758873 | orchestrator | Friday 30 January 2026 05:35:01 +0000 (0:00:00.070) 0:00:13.851 ******** 2026-01-30 05:35:22.758881 | orchestrator | 2026-01-30 05:35:22.758888 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-30 05:35:22.758895 | orchestrator | Friday 30 January 2026 05:35:01 +0000 (0:00:00.073) 0:00:13.924 ******** 2026-01-30 05:35:22.758902 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:35:22.758909 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:35:22.758917 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:35:22.758924 | orchestrator | 2026-01-30 05:35:22.758931 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-30 05:35:22.758938 | orchestrator | Friday 30 January 2026 05:35:11 +0000 (0:00:10.039) 0:00:23.964 ******** 2026-01-30 05:35:22.758945 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:35:22.758959 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:35:22.758966 | orchestrator | changed: [testbed-node-1] 
2026-01-30 05:35:22.758973 | orchestrator | 2026-01-30 05:35:22.758980 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:35:22.758989 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:35:22.758998 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:35:22.759005 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:35:22.759012 | orchestrator | 2026-01-30 05:35:22.759019 | orchestrator | 2026-01-30 05:35:22.759026 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:35:22.759033 | orchestrator | Friday 30 January 2026 05:35:22 +0000 (0:00:10.871) 0:00:34.836 ******** 2026-01-30 05:35:22.759040 | orchestrator | =============================================================================== 2026-01-30 05:35:22.759047 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.87s 2026-01-30 05:35:22.759055 | orchestrator | redis : Restart redis container ---------------------------------------- 10.04s 2026-01-30 05:35:22.759062 | orchestrator | redis : Copying over redis config files --------------------------------- 2.82s 2026-01-30 05:35:22.759069 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.12s 2026-01-30 05:35:22.759076 | orchestrator | redis : Copying over default config.json files -------------------------- 2.09s 2026-01-30 05:35:22.759083 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.34s 2026-01-30 05:35:22.759090 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.01s 2026-01-30 05:35:22.759097 | orchestrator | redis : include_tasks 
--------------------------------------------------- 0.95s 2026-01-30 05:35:22.759104 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.90s 2026-01-30 05:35:22.759111 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2026-01-30 05:35:22.759118 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.54s 2026-01-30 05:35:22.759125 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s 2026-01-30 05:35:23.039886 | orchestrator | + osism apply -a upgrade mariadb 2026-01-30 05:35:25.046773 | orchestrator | 2026-01-30 05:35:25 | INFO  | Task 9cafe6cb-4982-4685-8115-cc1b09252416 (mariadb) was prepared for execution. 2026-01-30 05:35:25.046844 | orchestrator | 2026-01-30 05:35:25 | INFO  | It takes a moment until task 9cafe6cb-4982-4685-8115-cc1b09252416 (mariadb) has been started and output is visible here. 2026-01-30 05:35:38.871467 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-30 05:35:38.871568 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-30 05:35:38.871588 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-30 05:35:38.871595 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-30 05:35:38.871610 | orchestrator | 2026-01-30 05:35:38.871618 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:35:38.871626 | orchestrator | 2026-01-30 05:35:38.871634 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:35:38.871641 | orchestrator | Friday 30 January 2026 05:35:30 +0000 (0:00:01.285) 0:00:01.285 ******** 2026-01-30 05:35:38.871648 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:35:38.871657 | orchestrator | ok: [testbed-node-1] 
2026-01-30 05:35:38.871681 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:35:38.871690 | orchestrator | 2026-01-30 05:35:38.871703 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 05:35:38.871715 | orchestrator | Friday 30 January 2026 05:35:31 +0000 (0:00:01.005) 0:00:02.291 ******** 2026-01-30 05:35:38.871728 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-01-30 05:35:38.871754 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-01-30 05:35:38.871769 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-01-30 05:35:38.871785 | orchestrator | 2026-01-30 05:35:38.871797 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-01-30 05:35:38.871809 | orchestrator | 2026-01-30 05:35:38.871821 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-01-30 05:35:38.871833 | orchestrator | Friday 30 January 2026 05:35:32 +0000 (0:00:01.011) 0:00:03.302 ******** 2026-01-30 05:35:38.871844 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:35:38.871857 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-30 05:35:38.871869 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-30 05:35:38.871881 | orchestrator | 2026-01-30 05:35:38.871890 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-30 05:35:38.871898 | orchestrator | Friday 30 January 2026 05:35:32 +0000 (0:00:00.289) 0:00:03.592 ******** 2026-01-30 05:35:38.871905 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:35:38.871913 | orchestrator | 2026-01-30 05:35:38.871921 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-01-30 05:35:38.871928 | 
orchestrator | Friday 30 January 2026 05:35:33 +0000 (0:00:00.871) 0:00:04.464 ******** 2026-01-30 05:35:38.871940 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:35:38.871977 | orchestrator 
| ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:35:38.872000 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:35:38.872009 | orchestrator | 2026-01-30 05:35:38.872018 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-01-30 05:35:38.872026 | orchestrator | Friday 30 January 2026 05:35:37 +0000 (0:00:03.530) 0:00:07.994 ******** 2026-01-30 
05:35:38.872035 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:35:38.872043 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:35:38.872051 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:35:38.872059 | orchestrator | 2026-01-30 05:35:38.872068 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-01-30 05:35:38.872076 | orchestrator | Friday 30 January 2026 05:35:37 +0000 (0:00:00.651) 0:00:08.646 ******** 2026-01-30 05:35:38.872089 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:35:38.872097 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:35:38.872105 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:35:38.872113 | orchestrator | 2026-01-30 05:35:38.872122 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-01-30 05:35:38.872135 | orchestrator | Friday 30 January 2026 05:35:38 +0000 (0:00:01.203) 0:00:09.849 ******** 2026-01-30 05:35:51.489507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:35:51.489634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:35:51.489711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:35:51.489736 | orchestrator | 2026-01-30 05:35:51.489756 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-01-30 05:35:51.489775 | orchestrator | Friday 30 January 2026 05:35:42 +0000 (0:00:03.214) 0:00:13.064 ******** 2026-01-30 05:35:51.489793 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:35:51.489812 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:35:51.489828 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:35:51.489845 | orchestrator | 2026-01-30 05:35:51.489861 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-01-30 05:35:51.489880 | orchestrator | Friday 30 January 2026 05:35:43 +0000 (0:00:01.118) 0:00:14.182 ******** 2026-01-30 05:35:51.489897 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:35:51.489916 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:35:51.489935 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:35:51.489954 | orchestrator | 2026-01-30 05:35:51.489975 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-30 05:35:51.489995 | orchestrator | Friday 30 January 2026 05:35:47 +0000 (0:00:04.239) 0:00:18.421 ******** 2026-01-30 05:35:51.490010 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-30 05:35:51.490093 | orchestrator | 2026-01-30 05:35:51.490106 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-30 05:35:51.490118 | orchestrator | Friday 30 January 2026 05:35:48 +0000 (0:00:01.279) 0:00:19.700 ******** 2026-01-30 05:35:51.490145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:53.688568 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:35:53.688692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:53.688717 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:35:53.688730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:53.688763 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:35:53.688775 | orchestrator | 2026-01-30 05:35:53.688788 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-30 05:35:53.688801 | orchestrator | Friday 30 January 2026 05:35:51 +0000 (0:00:02.767) 0:00:22.468 ******** 2026-01-30 05:35:53.688838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:53.688851 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:35:53.688862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:53.688882 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:35:53.688909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:59.435559 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:35:59.435667 | orchestrator | 2026-01-30 05:35:59.435682 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-30 05:35:59.435692 | orchestrator | Friday 30 January 2026 05:35:53 +0000 (0:00:02.201) 0:00:24.669 ******** 2026-01-30 05:35:59.435705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:59.435737 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:35:59.435760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:59.435770 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:35:59.435797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:35:59.435817 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:35:59.435825 | orchestrator | 2026-01-30 05:35:59.435833 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-01-30 05:35:59.435846 | orchestrator | Friday 30 January 2026 05:35:56 +0000 (0:00:02.792) 0:00:27.462 ******** 2026-01-30 05:35:59.435866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:35:59.435892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': 
{'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:36:02.526418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-01-30 05:36:02.526627 | orchestrator | 2026-01-30 05:36:02.526644 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-01-30 05:36:02.526654 | orchestrator | Friday 30 January 2026 05:35:59 +0000 (0:00:02.957) 0:00:30.420 ******** 2026-01-30 05:36:02.526664 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:36:02.526674 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:36:02.526735 | orchestrator | } 2026-01-30 05:36:02.526747 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:36:02.526756 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:36:02.526765 | orchestrator | } 2026-01-30 05:36:02.526774 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:36:02.526782 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:36:02.526791 | orchestrator | } 2026-01-30 05:36:02.526800 | orchestrator | 2026-01-30 05:36:02.526809 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:36:02.526818 | orchestrator | Friday 30 January 2026 05:35:59 +0000 (0:00:00.320) 0:00:30.740 ******** 2026-01-30 05:36:02.526846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:02.526876 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:02.526887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:02.526902 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:02.526911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:02.526934 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:02.526945 | orchestrator | 2026-01-30 05:36:02.526956 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-01-30 05:36:02.526977 | orchestrator | Friday 30 January 2026 05:36:02 +0000 (0:00:02.756) 0:00:33.497 ******** 2026-01-30 05:36:11.436922 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437059 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437071 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437080 | orchestrator | 2026-01-30 
05:36:11.437089 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-01-30 05:36:11.437099 | orchestrator | Friday 30 January 2026 05:36:02 +0000 (0:00:00.313) 0:00:33.811 ******** 2026-01-30 05:36:11.437107 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437115 | orchestrator | 2026-01-30 05:36:11.437124 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-01-30 05:36:11.437132 | orchestrator | Friday 30 January 2026 05:36:02 +0000 (0:00:00.126) 0:00:33.937 ******** 2026-01-30 05:36:11.437140 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437148 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437155 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437163 | orchestrator | 2026-01-30 05:36:11.437171 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-01-30 05:36:11.437179 | orchestrator | Friday 30 January 2026 05:36:03 +0000 (0:00:00.333) 0:00:34.270 ******** 2026-01-30 05:36:11.437187 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437195 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437202 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437210 | orchestrator | 2026-01-30 05:36:11.437218 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-01-30 05:36:11.437226 | orchestrator | Friday 30 January 2026 05:36:03 +0000 (0:00:00.469) 0:00:34.739 ******** 2026-01-30 05:36:11.437234 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437242 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437249 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437257 | orchestrator | 2026-01-30 05:36:11.437265 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-01-30 05:36:11.437273 | 
orchestrator | Friday 30 January 2026 05:36:04 +0000 (0:00:00.332) 0:00:35.072 ******** 2026-01-30 05:36:11.437281 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437289 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437296 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437304 | orchestrator | 2026-01-30 05:36:11.437312 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-01-30 05:36:11.437347 | orchestrator | Friday 30 January 2026 05:36:04 +0000 (0:00:00.306) 0:00:35.379 ******** 2026-01-30 05:36:11.437358 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437372 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437385 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437397 | orchestrator | 2026-01-30 05:36:11.437410 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-01-30 05:36:11.437423 | orchestrator | Friday 30 January 2026 05:36:04 +0000 (0:00:00.328) 0:00:35.708 ******** 2026-01-30 05:36:11.437436 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437448 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437461 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437474 | orchestrator | 2026-01-30 05:36:11.437541 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-01-30 05:36:11.437557 | orchestrator | Friday 30 January 2026 05:36:05 +0000 (0:00:00.559) 0:00:36.268 ******** 2026-01-30 05:36:11.437570 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 05:36:11.437584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 05:36:11.437598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 05:36:11.437610 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437624 | orchestrator | 
skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-30 05:36:11.437638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-30 05:36:11.437652 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-30 05:36:11.437666 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437674 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-30 05:36:11.437682 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-30 05:36:11.437690 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-30 05:36:11.437697 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437705 | orchestrator | 2026-01-30 05:36:11.437713 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-01-30 05:36:11.437720 | orchestrator | Friday 30 January 2026 05:36:05 +0000 (0:00:00.388) 0:00:36.656 ******** 2026-01-30 05:36:11.437728 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437736 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437743 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437751 | orchestrator | 2026-01-30 05:36:11.437759 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-01-30 05:36:11.437767 | orchestrator | Friday 30 January 2026 05:36:06 +0000 (0:00:00.358) 0:00:37.014 ******** 2026-01-30 05:36:11.437774 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437782 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437790 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437797 | orchestrator | 2026-01-30 05:36:11.437805 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-01-30 05:36:11.437813 | orchestrator | Friday 30 January 2026 05:36:06 +0000 (0:00:00.612) 0:00:37.627 ******** 2026-01-30 05:36:11.437820 | 
orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437828 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437835 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437843 | orchestrator | 2026-01-30 05:36:11.437851 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-01-30 05:36:11.437859 | orchestrator | Friday 30 January 2026 05:36:06 +0000 (0:00:00.360) 0:00:37.988 ******** 2026-01-30 05:36:11.437867 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437875 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437882 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437890 | orchestrator | 2026-01-30 05:36:11.437898 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-01-30 05:36:11.437925 | orchestrator | Friday 30 January 2026 05:36:07 +0000 (0:00:00.371) 0:00:38.359 ******** 2026-01-30 05:36:11.437943 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437951 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.437959 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.437967 | orchestrator | 2026-01-30 05:36:11.437975 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-01-30 05:36:11.437983 | orchestrator | Friday 30 January 2026 05:36:07 +0000 (0:00:00.337) 0:00:38.696 ******** 2026-01-30 05:36:11.437991 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.437998 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.438006 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.438014 | orchestrator | 2026-01-30 05:36:11.438082 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-01-30 05:36:11.438090 | orchestrator | Friday 30 January 2026 05:36:08 +0000 (0:00:00.644) 0:00:39.341 ******** 2026-01-30 05:36:11.438098 | 
orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.438105 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.438113 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.438121 | orchestrator | 2026-01-30 05:36:11.438128 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-01-30 05:36:11.438136 | orchestrator | Friday 30 January 2026 05:36:08 +0000 (0:00:00.350) 0:00:39.692 ******** 2026-01-30 05:36:11.438144 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.438151 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:11.438159 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:11.438167 | orchestrator | 2026-01-30 05:36:11.438175 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-01-30 05:36:11.438182 | orchestrator | Friday 30 January 2026 05:36:09 +0000 (0:00:00.351) 0:00:40.043 ******** 2026-01-30 05:36:11.438212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:11.438225 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:11.438243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:14.843733 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:14.843908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:14.843931 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:14.843943 | orchestrator | 2026-01-30 05:36:14.843956 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-01-30 05:36:14.843969 | orchestrator | Friday 30 January 2026 05:36:11 +0000 (0:00:02.375) 0:00:42.418 ******** 2026-01-30 05:36:14.843980 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:14.843990 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:14.844001 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:14.844012 | orchestrator | 2026-01-30 05:36:14.844023 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-01-30 05:36:14.844059 | orchestrator | Friday 30 January 2026 05:36:11 +0000 (0:00:00.568) 0:00:42.987 ******** 2026-01-30 05:36:14.844095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:14.844109 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:36:14.844127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:14.844139 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:36:14.844151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-30 05:36:14.844172 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:36:14.844185 | orchestrator | 2026-01-30 05:36:14.844198 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-01-30 05:36:14.844211 | orchestrator | Friday 30 January 2026 05:36:14 +0000 (0:00:02.617) 0:00:45.605 ******** 2026-01-30 05:36:14.844230 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.114126 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.114256 | orchestrator | skipping: 
[testbed-node-2] 2026-01-30 05:38:11.114272 | orchestrator | 2026-01-30 05:38:11.114286 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-30 05:38:11.114299 | orchestrator | Friday 30 January 2026 05:36:15 +0000 (0:00:00.766) 0:00:46.372 ******** 2026-01-30 05:38:11.114310 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.114321 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.114332 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.114342 | orchestrator | 2026-01-30 05:38:11.114370 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-30 05:38:11.114383 | orchestrator | Friday 30 January 2026 05:36:15 +0000 (0:00:00.573) 0:00:46.945 ******** 2026-01-30 05:38:11.114394 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.114405 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.114416 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.114426 | orchestrator | 2026-01-30 05:38:11.114437 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-30 05:38:11.114448 | orchestrator | Friday 30 January 2026 05:36:16 +0000 (0:00:00.354) 0:00:47.300 ******** 2026-01-30 05:38:11.114459 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.114470 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.114481 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.114491 | orchestrator | 2026-01-30 05:38:11.114502 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-30 05:38:11.114513 | orchestrator | Friday 30 January 2026 05:36:17 +0000 (0:00:00.896) 0:00:48.196 ******** 2026-01-30 05:38:11.114524 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.114535 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.114572 | orchestrator | skipping: 
[testbed-node-2] 2026-01-30 05:38:11.114589 | orchestrator | 2026-01-30 05:38:11.114627 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-01-30 05:38:11.114648 | orchestrator | Friday 30 January 2026 05:36:18 +0000 (0:00:00.952) 0:00:49.149 ******** 2026-01-30 05:38:11.114686 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.114708 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.114728 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.114747 | orchestrator | 2026-01-30 05:38:11.114767 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-01-30 05:38:11.114780 | orchestrator | Friday 30 January 2026 05:36:19 +0000 (0:00:00.954) 0:00:50.103 ******** 2026-01-30 05:38:11.114794 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.114806 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.114818 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.114830 | orchestrator | 2026-01-30 05:38:11.114905 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-01-30 05:38:11.114918 | orchestrator | Friday 30 January 2026 05:36:19 +0000 (0:00:00.353) 0:00:50.457 ******** 2026-01-30 05:38:11.114930 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.114941 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.114952 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.114962 | orchestrator | 2026-01-30 05:38:11.114975 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-30 05:38:11.114994 | orchestrator | Friday 30 January 2026 05:36:19 +0000 (0:00:00.361) 0:00:50.819 ******** 2026-01-30 05:38:11.115011 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.115028 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.115046 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.115063 | orchestrator | 2026-01-30 
05:38:11.115081 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-30 05:38:11.115098 | orchestrator | Friday 30 January 2026 05:36:21 +0000 (0:00:01.222) 0:00:52.042 ******** 2026-01-30 05:38:11.115115 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.115132 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.115149 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.115166 | orchestrator | 2026-01-30 05:38:11.115183 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-30 05:38:11.115202 | orchestrator | Friday 30 January 2026 05:36:21 +0000 (0:00:00.353) 0:00:52.395 ******** 2026-01-30 05:38:11.115219 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.115237 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.115255 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.115273 | orchestrator | 2026-01-30 05:38:11.115293 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-30 05:38:11.115312 | orchestrator | Friday 30 January 2026 05:36:21 +0000 (0:00:00.388) 0:00:52.783 ******** 2026-01-30 05:38:11.115330 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.115349 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.115368 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.115386 | orchestrator | 2026-01-30 05:38:11.115405 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-30 05:38:11.115422 | orchestrator | Friday 30 January 2026 05:36:24 +0000 (0:00:02.672) 0:00:55.456 ******** 2026-01-30 05:38:11.115440 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.115457 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.115475 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.115494 | orchestrator | 2026-01-30 05:38:11.115511 | orchestrator | TASK [mariadb : Divide 
hosts by their MariaDB service WSREP sync status] ******* 2026-01-30 05:38:11.115529 | orchestrator | Friday 30 January 2026 05:36:25 +0000 (0:00:00.651) 0:00:56.108 ******** 2026-01-30 05:38:11.115547 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.115565 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.115584 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.115602 | orchestrator | 2026-01-30 05:38:11.115622 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-30 05:38:11.115665 | orchestrator | Friday 30 January 2026 05:36:25 +0000 (0:00:00.374) 0:00:56.483 ******** 2026-01-30 05:38:11.115679 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.115690 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.115701 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.115712 | orchestrator | 2026-01-30 05:38:11.115723 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-30 05:38:11.115734 | orchestrator | Friday 30 January 2026 05:36:26 +0000 (0:00:00.731) 0:00:57.215 ******** 2026-01-30 05:38:11.115745 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.115756 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.115767 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.115817 | orchestrator | 2026-01-30 05:38:11.115830 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-30 05:38:11.115872 | orchestrator | Friday 30 January 2026 05:36:26 +0000 (0:00:00.572) 0:00:57.787 ******** 2026-01-30 05:38:11.115883 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.115894 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-01-30 05:38:11.115904 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-01-30 05:38:11.115926 | orchestrator | 
skipping: [testbed-node-1] 2026-01-30 05:38:11.115937 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.115948 | orchestrator | 2026-01-30 05:38:11.115959 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-30 05:38:11.115969 | orchestrator | Friday 30 January 2026 05:36:27 +0000 (0:00:00.800) 0:00:58.588 ******** 2026-01-30 05:38:11.115980 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:38:11.115991 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:38:11.116005 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:38:11.116023 | orchestrator | 2026-01-30 05:38:11.116041 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-30 05:38:11.116058 | orchestrator | Friday 30 January 2026 05:36:28 +0000 (0:00:00.587) 0:00:59.176 ******** 2026-01-30 05:38:11.116074 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:11.116092 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.116109 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.116125 | orchestrator | 2026-01-30 05:38:11.116142 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-30 05:38:11.116159 | orchestrator | 2026-01-30 05:38:11.116176 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-30 05:38:11.116205 | orchestrator | Friday 30 January 2026 05:36:29 +0000 (0:00:00.851) 0:01:00.027 ******** 2026-01-30 05:38:11.116224 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:38:11.116244 | orchestrator | 2026-01-30 05:38:11.116263 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-30 05:38:11.116281 | orchestrator | Friday 30 January 2026 05:36:53 +0000 (0:00:24.036) 0:01:24.063 ******** 2026-01-30 05:38:11.116296 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.116307 | 
orchestrator | 2026-01-30 05:38:11.116318 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-30 05:38:11.116329 | orchestrator | Friday 30 January 2026 05:36:58 +0000 (0:00:05.650) 0:01:29.714 ******** 2026-01-30 05:38:11.116339 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:11.116350 | orchestrator | 2026-01-30 05:38:11.116361 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-30 05:38:11.116372 | orchestrator | 2026-01-30 05:38:11.116382 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-30 05:38:11.116393 | orchestrator | Friday 30 January 2026 05:37:01 +0000 (0:00:02.906) 0:01:32.620 ******** 2026-01-30 05:38:11.116404 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:38:11.116415 | orchestrator | 2026-01-30 05:38:11.116425 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-30 05:38:11.116446 | orchestrator | Friday 30 January 2026 05:37:26 +0000 (0:00:24.634) 0:01:57.255 ******** 2026-01-30 05:38:11.116457 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.116468 | orchestrator | 2026-01-30 05:38:11.116479 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-30 05:38:11.116490 | orchestrator | Friday 30 January 2026 05:37:31 +0000 (0:00:05.636) 0:02:02.891 ******** 2026-01-30 05:38:11.116500 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:11.116516 | orchestrator | 2026-01-30 05:38:11.116534 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-30 05:38:11.116559 | orchestrator | 2026-01-30 05:38:11.116585 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-30 05:38:11.116602 | orchestrator | Friday 30 January 2026 05:37:35 +0000 (0:00:03.164) 0:02:06.056 
******** 2026-01-30 05:38:11.116619 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:38:11.116636 | orchestrator | 2026-01-30 05:38:11.116654 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-30 05:38:11.116669 | orchestrator | Friday 30 January 2026 05:37:58 +0000 (0:00:23.427) 0:02:29.483 ******** 2026-01-30 05:38:11.116686 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.116704 | orchestrator | 2026-01-30 05:38:11.116722 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-30 05:38:11.116741 | orchestrator | Friday 30 January 2026 05:38:04 +0000 (0:00:05.630) 0:02:35.113 ******** 2026-01-30 05:38:11.116760 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-01-30 05:38:11.116778 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-30 05:38:11.116795 | orchestrator | mariadb_bootstrap_restart 2026-01-30 05:38:11.116806 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:11.116817 | orchestrator | 2026-01-30 05:38:11.116827 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-30 05:38:11.117016 | orchestrator | skipping: no hosts matched 2026-01-30 05:38:11.117039 | orchestrator | 2026-01-30 05:38:11.117050 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-30 05:38:11.117061 | orchestrator | skipping: no hosts matched 2026-01-30 05:38:11.117072 | orchestrator | 2026-01-30 05:38:11.117083 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-30 05:38:11.117093 | orchestrator | 2026-01-30 05:38:11.117104 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-30 05:38:11.117115 | orchestrator | Friday 30 January 2026 05:38:07 +0000 (0:00:03.342) 0:02:38.456 
******** 2026-01-30 05:38:11.117130 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:38:11.117153 | orchestrator | 2026-01-30 05:38:11.117270 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-30 05:38:11.117289 | orchestrator | Friday 30 January 2026 05:38:08 +0000 (0:00:01.212) 0:02:39.668 ******** 2026-01-30 05:38:11.117307 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:11.117325 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:11.117360 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:47.885138 | orchestrator | 2026-01-30 05:38:47.885255 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-30 05:38:47.885279 | orchestrator | Friday 30 January 2026 05:38:11 +0000 (0:00:02.420) 0:02:42.089 ******** 2026-01-30 05:38:47.885295 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:47.885310 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:47.885323 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:38:47.885339 | orchestrator | 2026-01-30 05:38:47.885355 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-30 05:38:47.885370 | orchestrator | Friday 30 January 2026 05:38:13 +0000 (0:00:02.405) 0:02:44.495 ******** 2026-01-30 05:38:47.885384 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:47.885398 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:47.885414 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:47.885459 | orchestrator | 2026-01-30 05:38:47.885470 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-30 05:38:47.885479 | orchestrator | Friday 30 January 2026 05:38:15 +0000 (0:00:02.424) 0:02:46.919 ******** 2026-01-30 05:38:47.885488 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:47.885496 | 
orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:47.885505 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:38:47.885513 | orchestrator | 2026-01-30 05:38:47.885522 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-01-30 05:38:47.885531 | orchestrator | Friday 30 January 2026 05:38:18 +0000 (0:00:02.388) 0:02:49.308 ******** 2026-01-30 05:38:47.885541 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:47.885556 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:47.885574 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:38:47.885595 | orchestrator | 2026-01-30 05:38:47.885609 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-01-30 05:38:47.885641 | orchestrator | Friday 30 January 2026 05:38:23 +0000 (0:00:05.106) 0:02:54.414 ******** 2026-01-30 05:38:47.885658 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:47.885673 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:47.885689 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:47.885704 | orchestrator | 2026-01-30 05:38:47.885718 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-01-30 05:38:47.885733 | orchestrator | Friday 30 January 2026 05:38:25 +0000 (0:00:02.522) 0:02:56.937 ******** 2026-01-30 05:38:47.885748 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:38:47.885764 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:38:47.885781 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:38:47.885796 | orchestrator | 2026-01-30 05:38:47.885812 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-30 05:38:47.885823 | orchestrator | Friday 30 January 2026 05:38:26 +0000 (0:00:00.855) 0:02:57.793 ******** 2026-01-30 05:38:47.885833 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:38:47.885843 | orchestrator | ok: 
[testbed-node-2] 2026-01-30 05:38:47.885854 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:38:47.885864 | orchestrator | 2026-01-30 05:38:47.885873 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-30 05:38:47.885883 | orchestrator | Friday 30 January 2026 05:38:29 +0000 (0:00:02.722) 0:03:00.515 ******** 2026-01-30 05:38:47.885893 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:38:47.885903 | orchestrator | 2026-01-30 05:38:47.885913 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-01-30 05:38:47.885922 | orchestrator | Friday 30 January 2026 05:38:30 +0000 (0:00:01.152) 0:03:01.668 ******** 2026-01-30 05:38:47.885971 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:38:47.885981 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:38:47.885992 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:38:47.886002 | orchestrator | 2026-01-30 05:38:47.886011 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:38:47.886071 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-01-30 05:38:47.886082 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-01-30 05:38:47.886091 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-01-30 05:38:47.886099 | orchestrator | 2026-01-30 05:38:47.886108 | orchestrator | 2026-01-30 05:38:47.886116 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:38:47.886125 | orchestrator | Friday 30 January 2026 05:38:47 +0000 (0:00:16.701) 0:03:18.370 ******** 2026-01-30 05:38:47.886133 | orchestrator | 
=============================================================================== 2026-01-30 05:38:47.886153 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 72.10s 2026-01-30 05:38:47.886161 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 16.92s 2026-01-30 05:38:47.886170 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 16.70s 2026-01-30 05:38:47.886178 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.41s 2026-01-30 05:38:47.886187 | orchestrator | service-check : mariadb | Get container facts --------------------------- 5.11s 2026-01-30 05:38:47.886196 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.24s 2026-01-30 05:38:47.886204 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.53s 2026-01-30 05:38:47.886213 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.21s 2026-01-30 05:38:47.886221 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.96s 2026-01-30 05:38:47.886230 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.79s 2026-01-30 05:38:47.886257 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.77s 2026-01-30 05:38:47.886267 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.76s 2026-01-30 05:38:47.886275 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.72s 2026-01-30 05:38:47.886284 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.67s 2026-01-30 05:38:47.886293 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.62s 2026-01-30 05:38:47.886301 | orchestrator | service-check : 
mariadb | Fail if containers are missing or not running --- 2.52s 2026-01-30 05:38:47.886310 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.42s 2026-01-30 05:38:47.886321 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.42s 2026-01-30 05:38:47.886341 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.41s 2026-01-30 05:38:47.886363 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.39s 2026-01-30 05:38:48.261179 | orchestrator | + osism apply -a upgrade rabbitmq 2026-01-30 05:38:50.326691 | orchestrator | 2026-01-30 05:38:50 | INFO  | Task 47c784cd-b666-40f3-a6bf-add9dc51409e (rabbitmq) was prepared for execution. 2026-01-30 05:38:50.326785 | orchestrator | 2026-01-30 05:38:50 | INFO  | It takes a moment until task 47c784cd-b666-40f3-a6bf-add9dc51409e (rabbitmq) has been started and output is visible here. 2026-01-30 05:39:34.731032 | orchestrator | 2026-01-30 05:39:34.731252 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:39:34.731276 | orchestrator | 2026-01-30 05:39:34.731310 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:39:34.731325 | orchestrator | Friday 30 January 2026 05:38:56 +0000 (0:00:01.585) 0:00:01.585 ******** 2026-01-30 05:39:34.731338 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:39:34.731347 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:39:34.731355 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:39:34.731363 | orchestrator | 2026-01-30 05:39:34.731371 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 05:39:34.731379 | orchestrator | Friday 30 January 2026 05:38:58 +0000 (0:00:01.919) 0:00:03.504 ******** 2026-01-30 05:39:34.731387 | orchestrator | ok: [testbed-node-0] => 
(item=enable_rabbitmq_True) 2026-01-30 05:39:34.731396 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-01-30 05:39:34.731403 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-30 05:39:34.731411 | orchestrator | 2026-01-30 05:39:34.731419 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-30 05:39:34.731426 | orchestrator | 2026-01-30 05:39:34.731434 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-30 05:39:34.731442 | orchestrator | Friday 30 January 2026 05:39:00 +0000 (0:00:02.741) 0:00:06.246 ******** 2026-01-30 05:39:34.731469 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:39:34.731479 | orchestrator | 2026-01-30 05:39:34.731486 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-30 05:39:34.731494 | orchestrator | Friday 30 January 2026 05:39:02 +0000 (0:00:01.897) 0:00:08.143 ******** 2026-01-30 05:39:34.731502 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:39:34.731510 | orchestrator | 2026-01-30 05:39:34.731517 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-30 05:39:34.731526 | orchestrator | Friday 30 January 2026 05:39:04 +0000 (0:00:02.312) 0:00:10.456 ******** 2026-01-30 05:39:34.731534 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:39:34.731542 | orchestrator | 2026-01-30 05:39:34.731551 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-30 05:39:34.731561 | orchestrator | Friday 30 January 2026 05:39:08 +0000 (0:00:03.272) 0:00:13.729 ******** 2026-01-30 05:39:34.731571 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:39:34.731580 | orchestrator | 2026-01-30 05:39:34.731589 | orchestrator | TASK [rabbitmq : Check if 
running RabbitMQ is at most one version behind] ****** 2026-01-30 05:39:34.731598 | orchestrator | Friday 30 January 2026 05:39:18 +0000 (0:00:10.076) 0:00:23.805 ******** 2026-01-30 05:39:34.731607 | orchestrator | ok: [testbed-node-0] => { 2026-01-30 05:39:34.731616 | orchestrator |  "changed": false, 2026-01-30 05:39:34.731625 | orchestrator |  "msg": "All assertions passed" 2026-01-30 05:39:34.731634 | orchestrator | } 2026-01-30 05:39:34.731643 | orchestrator | 2026-01-30 05:39:34.731652 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-30 05:39:34.731661 | orchestrator | Friday 30 January 2026 05:39:19 +0000 (0:00:01.268) 0:00:25.073 ******** 2026-01-30 05:39:34.731670 | orchestrator | ok: [testbed-node-0] => { 2026-01-30 05:39:34.731679 | orchestrator |  "changed": false, 2026-01-30 05:39:34.731687 | orchestrator |  "msg": "All assertions passed" 2026-01-30 05:39:34.731696 | orchestrator | } 2026-01-30 05:39:34.731706 | orchestrator | 2026-01-30 05:39:34.731720 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-30 05:39:34.731733 | orchestrator | Friday 30 January 2026 05:39:21 +0000 (0:00:01.639) 0:00:26.713 ******** 2026-01-30 05:39:34.731747 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:39:34.731761 | orchestrator | 2026-01-30 05:39:34.731775 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-30 05:39:34.731788 | orchestrator | Friday 30 January 2026 05:39:22 +0000 (0:00:01.658) 0:00:28.371 ******** 2026-01-30 05:39:34.731802 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:39:34.731815 | orchestrator | 2026-01-30 05:39:34.731830 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-30 05:39:34.731844 | orchestrator | Friday 30 January 
2026 05:39:25 +0000 (0:00:02.377) 0:00:30.748 ******** 2026-01-30 05:39:34.731858 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:39:34.731870 | orchestrator | 2026-01-30 05:39:34.731880 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-30 05:39:34.731889 | orchestrator | Friday 30 January 2026 05:39:28 +0000 (0:00:03.143) 0:00:33.891 ******** 2026-01-30 05:39:34.731898 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:39:34.731907 | orchestrator | 2026-01-30 05:39:34.731915 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-30 05:39:34.731923 | orchestrator | Friday 30 January 2026 05:39:30 +0000 (0:00:01.902) 0:00:35.794 ******** 2026-01-30 05:39:34.731966 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:34.731988 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:34.731999 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:34.732007 | orchestrator | 2026-01-30 05:39:34.732015 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-30 05:39:34.732023 | orchestrator | Friday 30 January 2026 05:39:32 +0000 (0:00:01.906) 0:00:37.700 ******** 2026-01-30 05:39:34.732032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:34.732086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:54.200585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:54.200682 | orchestrator | 2026-01-30 05:39:54.200693 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-30 05:39:54.200700 | orchestrator | Friday 30 January 2026 05:39:34 +0000 
(0:00:02.501) 0:00:40.202 ******** 2026-01-30 05:39:54.200706 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-30 05:39:54.200714 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-30 05:39:54.200720 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-30 05:39:54.200727 | orchestrator | 2026-01-30 05:39:54.200733 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-30 05:39:54.200739 | orchestrator | Friday 30 January 2026 05:39:37 +0000 (0:00:02.415) 0:00:42.617 ******** 2026-01-30 05:39:54.200745 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-30 05:39:54.200752 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-30 05:39:54.200759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-30 05:39:54.200765 | orchestrator | 2026-01-30 05:39:54.200771 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-30 05:39:54.200778 | orchestrator | Friday 30 January 2026 05:39:40 +0000 (0:00:03.223) 0:00:45.841 ******** 2026-01-30 05:39:54.200783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-30 05:39:54.200790 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-30 05:39:54.200815 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-30 05:39:54.200822 | orchestrator | 2026-01-30 05:39:54.200828 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-30 05:39:54.200833 | orchestrator | Friday 30 
January 2026 05:39:42 +0000 (0:00:02.337) 0:00:48.178 ******** 2026-01-30 05:39:54.200839 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-30 05:39:54.200845 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-30 05:39:54.200852 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-30 05:39:54.200858 | orchestrator | 2026-01-30 05:39:54.200865 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-30 05:39:54.200872 | orchestrator | Friday 30 January 2026 05:39:45 +0000 (0:00:02.392) 0:00:50.571 ******** 2026-01-30 05:39:54.200878 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-30 05:39:54.200884 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-30 05:39:54.200891 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-30 05:39:54.200897 | orchestrator | 2026-01-30 05:39:54.200904 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-30 05:39:54.200910 | orchestrator | Friday 30 January 2026 05:39:47 +0000 (0:00:02.379) 0:00:52.950 ******** 2026-01-30 05:39:54.200917 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-30 05:39:54.200929 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-30 05:39:54.200935 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-30 05:39:54.200942 | orchestrator | 2026-01-30 05:39:54.200949 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-30 05:39:54.200955 | 
orchestrator | Friday 30 January 2026 05:39:50 +0000 (0:00:02.550) 0:00:55.501 ******** 2026-01-30 05:39:54.200961 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:39:54.200968 | orchestrator | 2026-01-30 05:39:54.200988 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-01-30 05:39:54.200995 | orchestrator | Friday 30 January 2026 05:39:51 +0000 (0:00:01.676) 0:00:57.178 ******** 2026-01-30 05:39:54.201002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:54.201010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:54.201025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:39:54.201033 | orchestrator | 2026-01-30 05:39:54.201040 | orchestrator | TASK [service-cert-copy : rabbitmq | 
Copying over backend internal TLS certificate] *** 2026-01-30 05:39:54.201047 | orchestrator | Friday 30 January 2026 05:39:54 +0000 (0:00:02.367) 0:00:59.545 ******** 2026-01-30 05:39:54.201063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:40:03.465851 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:40:03.465963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:40:03.466080 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:40:03.466098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:40:03.466201 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:40:03.466225 | orchestrator | 2026-01-30 05:40:03.466244 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-01-30 05:40:03.466264 | orchestrator | Friday 30 January 2026 05:39:55 +0000 (0:00:01.447) 
0:01:00.993 ******** 2026-01-30 05:40:03.466302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:40:03.466350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:40:03.466388 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:40:03.466408 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:40:03.466429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:40:03.466451 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:40:03.466471 | orchestrator | 2026-01-30 05:40:03.466491 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-30 05:40:03.466511 | orchestrator | Friday 30 January 2026 05:39:57 +0000 (0:00:01.771) 0:01:02.765 ******** 2026-01-30 05:40:03.466531 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:40:03.466548 | orchestrator | ok: [testbed-node-2] 
2026-01-30 05:40:03.466568 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:40:03.466586 | orchestrator | 2026-01-30 05:40:03.466607 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-01-30 05:40:03.466626 | orchestrator | Friday 30 January 2026 05:40:01 +0000 (0:00:03.947) 0:01:06.713 ******** 2026-01-30 05:40:03.466656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:40:03.466692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:41:55.825331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-30 05:41:55.825472 | orchestrator | 2026-01-30 05:41:55.825494 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-01-30 05:41:55.825510 | orchestrator | Friday 30 January 2026 05:40:03 +0000 (0:00:02.228) 0:01:08.942 ******** 
2026-01-30 05:41:55.825520 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:41:55.825528 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:41:55.825536 | orchestrator | } 2026-01-30 05:41:55.825544 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:41:55.825551 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:41:55.825558 | orchestrator | } 2026-01-30 05:41:55.825565 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:41:55.825572 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:41:55.825580 | orchestrator | } 2026-01-30 05:41:55.825587 | orchestrator | 2026-01-30 05:41:55.825595 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:41:55.825602 | orchestrator | Friday 30 January 2026 05:40:04 +0000 (0:00:01.375) 0:01:10.317 ******** 2026-01-30 05:41:55.825611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 
05:41:55.825636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:41:55.825664 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:41:55.825672 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:41:55.825694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-30 05:41:55.825703 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:41:55.825710 | orchestrator | 2026-01-30 05:41:55.825718 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-30 05:41:55.825725 | orchestrator | Friday 30 January 2026 05:40:06 +0000 (0:00:01.972) 0:01:12.290 ******** 2026-01-30 05:41:55.825732 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:41:55.825739 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:41:55.825746 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:41:55.825753 | orchestrator | 2026-01-30 05:41:55.825760 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-30 05:41:55.825767 | orchestrator | 2026-01-30 05:41:55.825775 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-30 05:41:55.825782 | orchestrator | Friday 30 January 2026 05:40:09 +0000 (0:00:02.391) 0:01:14.681 ******** 2026-01-30 05:41:55.825789 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:41:55.825797 | orchestrator | 2026-01-30 05:41:55.825804 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-30 05:41:55.825811 | orchestrator | Friday 30 January 2026 05:40:11 +0000 (0:00:02.106) 0:01:16.787 ******** 2026-01-30 05:41:55.825818 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:41:55.825825 | orchestrator | 2026-01-30 05:41:55.825832 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-30 
05:41:55.825839 | orchestrator | Friday 30 January 2026 05:40:21 +0000 (0:00:10.174) 0:01:26.962 ******** 2026-01-30 05:41:55.825846 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:41:55.825854 | orchestrator | 2026-01-30 05:41:55.825861 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-30 05:41:55.825868 | orchestrator | Friday 30 January 2026 05:40:30 +0000 (0:00:09.281) 0:01:36.244 ******** 2026-01-30 05:41:55.825875 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:41:55.825882 | orchestrator | 2026-01-30 05:41:55.825889 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-30 05:41:55.825896 | orchestrator | 2026-01-30 05:41:55.825904 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-30 05:41:55.825911 | orchestrator | Friday 30 January 2026 05:40:41 +0000 (0:00:10.820) 0:01:47.065 ******** 2026-01-30 05:41:55.825918 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:41:55.825925 | orchestrator | 2026-01-30 05:41:55.825932 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-30 05:41:55.825946 | orchestrator | Friday 30 January 2026 05:40:43 +0000 (0:00:01.906) 0:01:48.971 ******** 2026-01-30 05:41:55.825954 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:41:55.825961 | orchestrator | 2026-01-30 05:41:55.825968 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-30 05:41:55.825975 | orchestrator | Friday 30 January 2026 05:40:53 +0000 (0:00:10.509) 0:01:59.481 ******** 2026-01-30 05:41:55.825982 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:41:55.825989 | orchestrator | 2026-01-30 05:41:55.825996 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-30 05:41:55.826003 | orchestrator | Friday 30 January 2026 
05:41:08 +0000 (0:00:14.803) 0:02:14.284 ******** 2026-01-30 05:41:55.826066 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:41:55.826077 | orchestrator | 2026-01-30 05:41:55.826084 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-30 05:41:55.826092 | orchestrator | 2026-01-30 05:41:55.826099 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-30 05:41:55.826106 | orchestrator | Friday 30 January 2026 05:41:20 +0000 (0:00:11.325) 0:02:25.610 ******** 2026-01-30 05:41:55.826113 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:41:55.826121 | orchestrator | 2026-01-30 05:41:55.826128 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-30 05:41:55.826135 | orchestrator | Friday 30 January 2026 05:41:21 +0000 (0:00:01.766) 0:02:27.376 ******** 2026-01-30 05:41:55.826142 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:41:55.826149 | orchestrator | 2026-01-30 05:41:55.826157 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-30 05:41:55.826164 | orchestrator | Friday 30 January 2026 05:41:31 +0000 (0:00:09.785) 0:02:37.162 ******** 2026-01-30 05:41:55.826171 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:41:55.826178 | orchestrator | 2026-01-30 05:41:55.826185 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-30 05:41:55.826192 | orchestrator | Friday 30 January 2026 05:41:45 +0000 (0:00:13.582) 0:02:50.744 ******** 2026-01-30 05:41:55.826200 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:41:55.826207 | orchestrator | 2026-01-30 05:41:55.826214 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-01-30 05:41:55.826221 | orchestrator | 2026-01-30 05:41:55.826228 | orchestrator | TASK [Include rabbitmq 
post-deploy.yml] **************************************** 2026-01-30 05:41:55.826242 | orchestrator | Friday 30 January 2026 05:41:55 +0000 (0:00:10.548) 0:03:01.293 ******** 2026-01-30 05:42:02.188017 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-30 05:42:02.188118 | orchestrator | 2026-01-30 05:42:02.188130 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-30 05:42:02.188137 | orchestrator | Friday 30 January 2026 05:41:57 +0000 (0:00:01.349) 0:03:02.642 ******** 2026-01-30 05:42:02.188144 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:42:02.188152 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:42:02.188159 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:42:02.188165 | orchestrator | 2026-01-30 05:42:02.188172 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:42:02.188179 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 05:42:02.188189 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 05:42:02.188196 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-30 05:42:02.188202 | orchestrator | 2026-01-30 05:42:02.188208 | orchestrator | 2026-01-30 05:42:02.188215 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:42:02.188221 | orchestrator | Friday 30 January 2026 05:42:01 +0000 (0:00:04.661) 0:03:07.304 ******** 2026-01-30 05:42:02.188254 | orchestrator | =============================================================================== 2026-01-30 05:42:02.188261 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.67s 2026-01-30 05:42:02.188267 | orchestrator | rabbitmq : Waiting for rabbitmq to 
start ------------------------------- 32.70s 2026-01-30 05:42:02.188274 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 30.47s 2026-01-30 05:42:02.188280 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.08s 2026-01-30 05:42:02.188286 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.78s 2026-01-30 05:42:02.188293 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.66s 2026-01-30 05:42:02.188299 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.95s 2026-01-30 05:42:02.188305 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.27s 2026-01-30 05:42:02.188311 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.22s 2026-01-30 05:42:02.188317 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.14s 2026-01-30 05:42:02.188323 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.74s 2026-01-30 05:42:02.188330 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.55s 2026-01-30 05:42:02.188336 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.50s 2026-01-30 05:42:02.188342 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.42s 2026-01-30 05:42:02.188349 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.39s 2026-01-30 05:42:02.188435 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 2.39s 2026-01-30 05:42:02.188442 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.38s 2026-01-30 05:42:02.188449 | orchestrator | rabbitmq : Get container facts 
------------------------------------------ 2.38s 2026-01-30 05:42:02.188455 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.37s 2026-01-30 05:42:02.188461 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.34s 2026-01-30 05:42:02.491325 | orchestrator | + osism apply -a upgrade openvswitch 2026-01-30 05:42:04.525177 | orchestrator | 2026-01-30 05:42:04 | INFO  | Task 3e606c5f-8c18-439e-a8da-78454c8c9a35 (openvswitch) was prepared for execution. 2026-01-30 05:42:04.525267 | orchestrator | 2026-01-30 05:42:04 | INFO  | It takes a moment until task 3e606c5f-8c18-439e-a8da-78454c8c9a35 (openvswitch) has been started and output is visible here. 2026-01-30 05:42:21.733221 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-30 05:42:21.733338 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-30 05:42:21.733369 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-30 05:42:21.733380 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-30 05:42:21.733489 | orchestrator | 2026-01-30 05:42:21.733501 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:42:21.733513 | orchestrator | 2026-01-30 05:42:21.733524 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:42:21.733535 | orchestrator | Friday 30 January 2026 05:42:10 +0000 (0:00:01.407) 0:00:01.407 ******** 2026-01-30 05:42:21.733546 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:42:21.733558 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:42:21.733570 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:42:21.733580 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:42:21.733614 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:42:21.733625 | orchestrator | ok: 
[testbed-node-5] 2026-01-30 05:42:21.733635 | orchestrator | 2026-01-30 05:42:21.733647 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 05:42:21.733660 | orchestrator | Friday 30 January 2026 05:42:11 +0000 (0:00:01.340) 0:00:02.748 ******** 2026-01-30 05:42:21.733672 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-30 05:42:21.733685 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-30 05:42:21.733698 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-30 05:42:21.733710 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-30 05:42:21.733723 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-30 05:42:21.733736 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-30 05:42:21.733748 | orchestrator | 2026-01-30 05:42:21.733760 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-30 05:42:21.733773 | orchestrator | 2026-01-30 05:42:21.733785 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-30 05:42:21.733798 | orchestrator | Friday 30 January 2026 05:42:12 +0000 (0:00:00.978) 0:00:03.726 ******** 2026-01-30 05:42:21.733812 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 05:42:21.733825 | orchestrator | 2026-01-30 05:42:21.733838 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-30 05:42:21.733850 | orchestrator | Friday 30 January 2026 05:42:14 +0000 (0:00:01.644) 0:00:05.371 ******** 2026-01-30 05:42:21.733863 | 
orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-01-30 05:42:21.733882 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-01-30 05:42:21.733901 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-01-30 05:42:21.733920 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-01-30 05:42:21.733940 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-01-30 05:42:21.733960 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-01-30 05:42:21.733979 | orchestrator |
2026-01-30 05:42:21.733999 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-30 05:42:21.734097 | orchestrator | Friday 30 January 2026 05:42:15 +0000 (0:00:01.515) 0:00:06.887 ********
2026-01-30 05:42:21.734111 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-01-30 05:42:21.734122 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-01-30 05:42:21.734133 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-01-30 05:42:21.734144 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-01-30 05:42:21.734154 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-01-30 05:42:21.734165 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-01-30 05:42:21.734175 | orchestrator |
2026-01-30 05:42:21.734186 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-30 05:42:21.734197 | orchestrator | Friday 30 January 2026 05:42:17 +0000 (0:00:01.494) 0:00:08.381 ********
2026-01-30 05:42:21.734207 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-01-30 05:42:21.734218 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:42:21.734229 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-01-30 05:42:21.734240 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:42:21.734250 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-01-30 05:42:21.734261 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:42:21.734272 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-01-30 05:42:21.734282 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:42:21.734293 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-01-30 05:42:21.734315 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:42:21.734325 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-01-30 05:42:21.734336 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:42:21.734347 | orchestrator |
2026-01-30 05:42:21.734357 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-01-30 05:42:21.734368 | orchestrator | Friday 30 January 2026 05:42:18 +0000 (0:00:01.801) 0:00:10.183 ********
2026-01-30 05:42:21.734409 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:42:21.734431 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:42:21.734442 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:42:21.734452 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:42:21.734463 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:42:21.734495 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:42:21.734507 | orchestrator |
2026-01-30 05:42:21.734517 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-01-30 05:42:21.734528 | orchestrator | Friday 30 January 2026 05:42:19 +0000 (0:00:00.982) 0:00:11.166 ********
2026-01-30 05:42:21.734542 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:21.734562 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:21.734574 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:21.734585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:21.734604 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:21.734632 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:24.109710 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:24.109786 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:24.109794 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:24.109800 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:24.109830 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:24.109847 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:24.109852 | orchestrator |
2026-01-30 05:42:24.109858 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-01-30 05:42:24.109864 | orchestrator | Friday 30 January 2026 05:42:21 +0000 (0:00:01.801) 0:00:12.967 ********
2026-01-30 05:42:24.109868 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:24.109874 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:24.109878 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:24.109887 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:24.109895 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:24.109903 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:27.659963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:27.660038 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:27.660059 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:27.660073 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:27.660077 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:27.660091 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:27.660096 | orchestrator |
2026-01-30 05:42:27.660102 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-01-30 05:42:27.660107 | orchestrator | Friday 30 January 2026 05:42:24 +0000 (0:00:02.479) 0:00:15.446 ********
2026-01-30 05:42:27.660111 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:42:27.660115 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:42:27.660119 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:42:27.660123 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:42:27.660127 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:42:27.660130 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:42:27.660134 | orchestrator |
2026-01-30 05:42:27.660138 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-01-30 05:42:27.660142 | orchestrator | Friday 30 January 2026 05:42:25 +0000 (0:00:01.318) 0:00:16.765 ********
2026-01-30 05:42:27.660146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:27.660155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:27.660162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:27.660166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:27.660174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:29.031896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:29.032011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:29.032024 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:29.032044 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:29.032051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:29.032076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:29.032085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:29.032098 | orchestrator |
2026-01-30 05:42:29.032106 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-01-30 05:42:29.032115 | orchestrator | Friday 30 January 2026 05:42:27 +0000 (0:00:02.247) 0:00:19.012 ********
2026-01-30 05:42:29.032123 | orchestrator | changed: [testbed-node-0] => {
2026-01-30 05:42:29.032131 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:42:29.032138 | orchestrator | }
2026-01-30 05:42:29.032143 | orchestrator | changed: [testbed-node-1] => {
2026-01-30 05:42:29.032149 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:42:29.032155 | orchestrator | }
2026-01-30 05:42:29.032161 | orchestrator | changed: [testbed-node-2] => {
2026-01-30 05:42:29.032168 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:42:29.032174 | orchestrator | }
2026-01-30 05:42:29.032179 | orchestrator | changed: [testbed-node-3] => {
2026-01-30 05:42:29.032186 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:42:29.032192 | orchestrator | }
2026-01-30 05:42:29.032198 | orchestrator | changed: [testbed-node-4] => {
2026-01-30 05:42:29.032205 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:42:29.032211 | orchestrator | }
2026-01-30 05:42:29.032217 | orchestrator | changed: [testbed-node-5] => {
2026-01-30 05:42:29.032223 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:42:29.032230 | orchestrator | }
2026-01-30 05:42:29.032236 | orchestrator |
2026-01-30 05:42:29.032243 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-30 05:42:29.032249 | orchestrator | Friday 30 January 2026 05:42:28 +0000 (0:00:00.948) 0:00:19.960 ********
2026-01-30 05:42:29.032260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:29.032267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:29.032273 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:42:29.032279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:29.032295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:54.140133 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:42:54.140250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:54.140269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:54.140281 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:42:54.140307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:54.140318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-30 05:42:54.140351 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-01-30 05:42:54.140364 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-01-30 05:42:54.140385 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:42:54.140395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-01-30 05:42:54.140422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-30 05:42:54.140433 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:42:54.140468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-01-30 05:42:54.140498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-01-30 05:42:54.140510 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:42:54.140529 | orchestrator | 2026-01-30 05:42:54.140539 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 05:42:54.140549 | orchestrator | Friday 30 
January 2026 05:42:30 +0000 (0:00:01.746) 0:00:21.707 ******** 2026-01-30 05:42:54.140567 | orchestrator | 2026-01-30 05:42:54.140578 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 05:42:54.140588 | orchestrator | Friday 30 January 2026 05:42:30 +0000 (0:00:00.160) 0:00:21.867 ******** 2026-01-30 05:42:54.140597 | orchestrator | 2026-01-30 05:42:54.140606 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 05:42:54.140616 | orchestrator | Friday 30 January 2026 05:42:30 +0000 (0:00:00.139) 0:00:22.007 ******** 2026-01-30 05:42:54.140624 | orchestrator | 2026-01-30 05:42:54.140634 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 05:42:54.140643 | orchestrator | Friday 30 January 2026 05:42:30 +0000 (0:00:00.137) 0:00:22.145 ******** 2026-01-30 05:42:54.140652 | orchestrator | 2026-01-30 05:42:54.140662 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 05:42:54.140671 | orchestrator | Friday 30 January 2026 05:42:31 +0000 (0:00:00.327) 0:00:22.472 ******** 2026-01-30 05:42:54.140681 | orchestrator | 2026-01-30 05:42:54.140691 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-01-30 05:42:54.140701 | orchestrator | Friday 30 January 2026 05:42:31 +0000 (0:00:00.145) 0:00:22.618 ******** 2026-01-30 05:42:54.140711 | orchestrator | 2026-01-30 05:42:54.140721 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-01-30 05:42:54.140731 | orchestrator | Friday 30 January 2026 05:42:31 +0000 (0:00:00.145) 0:00:22.763 ******** 2026-01-30 05:42:54.140742 | orchestrator | changed: [testbed-node-3] 2026-01-30 05:42:54.140752 | orchestrator | changed: [testbed-node-5] 2026-01-30 05:42:54.140762 | orchestrator | changed: [testbed-node-4] 2026-01-30 
05:42:54.140773 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:42:54.140784 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:42:54.140795 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:42:54.140804 | orchestrator | 2026-01-30 05:42:54.140814 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-01-30 05:42:54.140825 | orchestrator | Friday 30 January 2026 05:42:42 +0000 (0:00:10.971) 0:00:33.735 ******** 2026-01-30 05:42:54.140834 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:42:54.140845 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:42:54.140855 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:42:54.140864 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:42:54.140873 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:42:54.140881 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:42:54.140890 | orchestrator | 2026-01-30 05:42:54.140898 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-01-30 05:42:54.140907 | orchestrator | Friday 30 January 2026 05:42:43 +0000 (0:00:01.162) 0:00:34.898 ******** 2026-01-30 05:42:54.140916 | orchestrator | changed: [testbed-node-3] 2026-01-30 05:42:54.140935 | orchestrator | changed: [testbed-node-5] 2026-01-30 05:43:08.142218 | orchestrator | changed: [testbed-node-4] 2026-01-30 05:43:08.142298 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:43:08.142304 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:43:08.142309 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:43:08.142313 | orchestrator | 2026-01-30 05:43:08.142318 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-01-30 05:43:08.142323 | orchestrator | Friday 30 January 2026 05:42:54 +0000 (0:00:10.479) 0:00:45.377 ******** 2026-01-30 05:43:08.142328 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-2'}) 2026-01-30 05:43:08.142334 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-01-30 05:43:08.142338 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-01-30 05:43:08.142342 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-01-30 05:43:08.142346 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-01-30 05:43:08.142366 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-01-30 05:43:08.142370 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-01-30 05:43:08.142374 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-01-30 05:43:08.142378 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-01-30 05:43:08.142382 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-01-30 05:43:08.142386 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-01-30 05:43:08.142390 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-01-30 05:43:08.142394 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 05:43:08.142407 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 05:43:08.142411 | orchestrator | ok: 
[testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 05:43:08.142415 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 05:43:08.142418 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 05:43:08.142422 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-01-30 05:43:08.142426 | orchestrator | 2026-01-30 05:43:08.142430 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-01-30 05:43:08.142434 | orchestrator | Friday 30 January 2026 05:43:01 +0000 (0:00:07.015) 0:00:52.392 ******** 2026-01-30 05:43:08.142438 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-01-30 05:43:08.142442 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:43:08.142446 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-01-30 05:43:08.142450 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:43:08.142453 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-01-30 05:43:08.142457 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:43:08.142461 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-01-30 05:43:08.142502 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-01-30 05:43:08.142506 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-01-30 05:43:08.142510 | orchestrator | 2026-01-30 05:43:08.142514 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-01-30 05:43:08.142518 | orchestrator | Friday 30 January 2026 05:43:03 +0000 (0:00:02.332) 0:00:54.725 ******** 2026-01-30 05:43:08.142522 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-01-30 05:43:08.142526 | 
orchestrator | skipping: [testbed-node-3] 2026-01-30 05:43:08.142530 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-01-30 05:43:08.142534 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:43:08.142537 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-01-30 05:43:08.142541 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:43:08.142545 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-01-30 05:43:08.142549 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-01-30 05:43:08.142553 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-01-30 05:43:08.142556 | orchestrator | 2026-01-30 05:43:08.142560 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:43:08.142565 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 05:43:08.142575 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 05:43:08.142588 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-01-30 05:43:08.142592 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:43:08.142596 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:43:08.142600 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:43:08.142604 | orchestrator | 2026-01-30 05:43:08.142608 | orchestrator | 2026-01-30 05:43:08.142611 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:43:08.142615 | orchestrator | Friday 30 January 2026 05:43:07 +0000 (0:00:04.214) 0:00:58.939 ******** 2026-01-30 05:43:08.142619 | 
orchestrator | =============================================================================== 2026-01-30 05:43:08.142623 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.97s 2026-01-30 05:43:08.142626 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.48s 2026-01-30 05:43:08.142630 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.01s 2026-01-30 05:43:08.142634 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.21s 2026-01-30 05:43:08.142637 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.48s 2026-01-30 05:43:08.142641 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.33s 2026-01-30 05:43:08.142645 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.25s 2026-01-30 05:43:08.142649 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.80s 2026-01-30 05:43:08.142652 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.80s 2026-01-30 05:43:08.142656 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.75s 2026-01-30 05:43:08.142660 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.65s 2026-01-30 05:43:08.142664 | orchestrator | module-load : Load modules ---------------------------------------------- 1.52s 2026-01-30 05:43:08.142671 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.49s 2026-01-30 05:43:08.142674 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.34s 2026-01-30 05:43:08.142678 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.32s 2026-01-30 05:43:08.142682 | orchestrator | 
openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.16s 2026-01-30 05:43:08.142686 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.06s 2026-01-30 05:43:08.142689 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.98s 2026-01-30 05:43:08.142693 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2026-01-30 05:43:08.142697 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.95s 2026-01-30 05:43:08.435227 | orchestrator | + osism apply -a upgrade ovn 2026-01-30 05:43:10.458885 | orchestrator | 2026-01-30 05:43:10 | INFO  | Task 666d8f28-63b5-4f2b-b18e-9523adccca00 (ovn) was prepared for execution. 2026-01-30 05:43:10.458967 | orchestrator | 2026-01-30 05:43:10 | INFO  | It takes a moment until task 666d8f28-63b5-4f2b-b18e-9523adccca00 (ovn) has been started and output is visible here. 2026-01-30 05:43:23.862791 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-01-30 05:43:23.862926 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-01-30 05:43:23.862967 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-01-30 05:43:23.862985 | orchestrator | (): 'NoneType' object is not subscriptable 2026-01-30 05:43:23.863020 | orchestrator | 2026-01-30 05:43:23.863035 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-30 05:43:23.863045 | orchestrator | 2026-01-30 05:43:23.863055 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-30 05:43:23.863065 | orchestrator | Friday 30 January 2026 05:43:15 +0000 (0:00:00.931) 0:00:00.931 ******** 2026-01-30 05:43:23.863075 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:43:23.863086 | orchestrator | ok: 
[testbed-node-1] 2026-01-30 05:43:23.863095 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:43:23.863105 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:43:23.863115 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:43:23.863124 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:43:23.863133 | orchestrator | 2026-01-30 05:43:23.863145 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-30 05:43:23.863161 | orchestrator | Friday 30 January 2026 05:43:17 +0000 (0:00:01.445) 0:00:02.376 ******** 2026-01-30 05:43:23.863179 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-30 05:43:23.863195 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-30 05:43:23.863212 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-30 05:43:23.863228 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-30 05:43:23.863244 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-30 05:43:23.863261 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-30 05:43:23.863277 | orchestrator | 2026-01-30 05:43:23.863294 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-30 05:43:23.863311 | orchestrator | 2026-01-30 05:43:23.863329 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-30 05:43:23.863348 | orchestrator | Friday 30 January 2026 05:43:18 +0000 (0:00:01.184) 0:00:03.561 ******** 2026-01-30 05:43:23.863366 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 05:43:23.863385 | orchestrator | 2026-01-30 05:43:23.863403 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-30 05:43:23.863421 | orchestrator | Friday 30 January 
2026 05:43:19 +0000 (0:00:01.489) 0:00:05.050 ******** 2026-01-30 05:43:23.863443 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863459 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863610 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863652 | 
orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863673 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863691 | orchestrator | 2026-01-30 05:43:23.863710 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-30 05:43:23.863728 | orchestrator | Friday 30 January 2026 05:43:21 +0000 (0:00:01.356) 0:00:06.406 ******** 2026-01-30 05:43:23.863745 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863764 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863784 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863803 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863821 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863857 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863877 | orchestrator | 2026-01-30 05:43:23.863894 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-30 05:43:23.863911 | orchestrator | Friday 30 January 2026 05:43:22 +0000 (0:00:01.471) 0:00:07.878 ******** 2026-01-30 05:43:23.863931 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:23.863962 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:27.758467 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:43:27.758612 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758629 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758642 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758654 | orchestrator |
2026-01-30 05:43:27.758667 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-01-30 05:43:27.758684 | orchestrator | Friday 30 January 2026 05:43:23 +0000 (0:00:01.140) 0:00:09.019 ********
2026-01-30 05:43:27.758738 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758777 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758798 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758856 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758901 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758919 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758930 | orchestrator |
2026-01-30 05:43:27.758942 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-01-30 05:43:27.758953 | orchestrator | Friday 30 January 2026 05:43:25 +0000 (0:00:01.909) 0:00:10.928 ********
2026-01-30 05:43:27.758965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.758983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.759016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.759037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.759066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.759087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:27.759104 | orchestrator |
2026-01-30 05:43:27.759117 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-01-30 05:43:27.759131 | orchestrator | Friday 30 January 2026 05:43:27 +0000
(0:00:01.270) 0:00:12.198 ********
2026-01-30 05:43:27.759144 | orchestrator | changed: [testbed-node-0] => {
2026-01-30 05:43:27.759158 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:43:27.759170 | orchestrator | }
2026-01-30 05:43:27.759183 | orchestrator | changed: [testbed-node-1] => {
2026-01-30 05:43:27.759195 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:43:27.759207 | orchestrator | }
2026-01-30 05:43:27.759219 | orchestrator | changed: [testbed-node-2] => {
2026-01-30 05:43:27.759231 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:43:27.759243 | orchestrator | }
2026-01-30 05:43:27.759255 | orchestrator | changed: [testbed-node-3] => {
2026-01-30 05:43:27.759267 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:43:27.759280 | orchestrator | }
2026-01-30 05:43:27.759293 | orchestrator | changed: [testbed-node-4] => {
2026-01-30 05:43:27.759305 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:43:27.759318 | orchestrator | }
2026-01-30 05:43:27.759339 | orchestrator | changed: [testbed-node-5] => {
2026-01-30 05:43:54.604630 | orchestrator |  "msg": "Notifying handlers"
2026-01-30 05:43:54.604734 | orchestrator | }
2026-01-30 05:43:54.604744 | orchestrator |
2026-01-30 05:43:54.604753 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-01-30 05:43:54.604763 | orchestrator | Friday 30 January 2026 05:43:27 +0000 (0:00:00.712) 0:00:12.911 ********
2026-01-30 05:43:54.604773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:54.604784 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:43:54.604817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:54.604825 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:43:54.604832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:54.604839 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:43:54.604847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:54.604854 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:43:54.604861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:54.604869 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:43:54.604889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:43:54.604897 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:43:54.604905 | orchestrator |
2026-01-30 05:43:54.604911 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-01-30 05:43:54.604918 | orchestrator | Friday 30 January 2026 05:43:29 +0000 (0:00:01.501) 0:00:14.413 ********
2026-01-30 05:43:54.604925 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:43:54.604933 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:43:54.604940 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:43:54.604947 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:43:54.604954 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:43:54.604962 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:43:54.604969 | orchestrator |
2026-01-30 05:43:54.604977 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-01-30 05:43:54.604985 | orchestrator | Friday 30 January 2026 05:43:32 +0000 (0:00:02.774) 0:00:17.188 ********
2026-01-30 05:43:54.604992 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-01-30 05:43:54.605000 | orchestrator | plugin (): 'NoneType' object is not
subscriptable
2026-01-30 05:43:54.605015 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-01-30 05:43:54.605038 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-01-30 05:43:54.605050 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-01-30 05:43:54.605057 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-01-30 05:43:54.605062 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-01-30 05:43:54.605068 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-01-30 05:43:54.605075 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-30 05:43:54.605081 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-30 05:43:54.605088 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-30 05:43:54.605096 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-30 05:43:54.605104 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-30 05:43:54.605112 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-01-30 05:43:54.605120 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-30 05:43:54.605130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-30 05:43:54.605138 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-30 05:43:54.605146 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-30 05:43:54.605154 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-30 05:43:54.605161 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-01-30 05:43:54.605168 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-30 05:43:54.605175 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-30 05:43:54.605182 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-30 05:43:54.605189 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-30 05:43:54.605196 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-30 05:43:54.605202 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-01-30 05:43:54.605210 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-30 05:43:54.605218 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-30 05:43:54.605226 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-30 05:43:54.605239 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-30 05:43:54.605247 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-30 05:43:54.605255 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-01-30 05:43:54.605263 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-30 05:43:54.605270 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-30 05:43:54.605284 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-30 05:43:54.605290 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-30 05:43:54.605296 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-30 05:43:54.605302 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-01-30 05:43:54.605308 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-30 05:43:54.605314 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-30 05:43:54.605320 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-01-30 05:43:54.605328 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-30 05:43:54.605341 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-30 05:46:17.696581 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-01-30 05:46:17.696674 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-01-30 05:46:17.696683 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-01-30 05:46:17.696701 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-01-30 05:46:17.696761 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-01-30 05:46:17.696770 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-01-30 05:46:17.696778 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-30 05:46:17.696785 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-30 05:46:17.696791 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-01-30 05:46:17.696798 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-01-30 05:46:17.696804 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-30 05:46:17.696812 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-30 05:46:17.696818 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-01-30 05:46:17.696825 | orchestrator |
2026-01-30 05:46:17.696832 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-30
05:46:17.696839 | orchestrator | Friday 30 January 2026 05:43:54 +0000 (0:00:22.053) 0:00:39.241 ********
2026-01-30 05:46:17.696845 | orchestrator |
2026-01-30 05:46:17.696851 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-30 05:46:17.696858 | orchestrator | Friday 30 January 2026 05:43:54 +0000 (0:00:00.090) 0:00:39.332 ********
2026-01-30 05:46:17.696864 | orchestrator |
2026-01-30 05:46:17.696870 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-30 05:46:17.696876 | orchestrator | Friday 30 January 2026 05:43:54 +0000 (0:00:00.074) 0:00:39.406 ********
2026-01-30 05:46:17.696882 | orchestrator |
2026-01-30 05:46:17.696908 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-30 05:46:17.696914 | orchestrator | Friday 30 January 2026 05:43:54 +0000 (0:00:00.069) 0:00:39.476 ********
2026-01-30 05:46:17.696920 | orchestrator |
2026-01-30 05:46:17.696927 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-30 05:46:17.696933 | orchestrator | Friday 30 January 2026 05:43:54 +0000 (0:00:00.110) 0:00:39.586 ********
2026-01-30 05:46:17.696939 | orchestrator |
2026-01-30 05:46:17.696945 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-01-30 05:46:17.696951 | orchestrator | Friday 30 January 2026 05:43:54 +0000 (0:00:00.070) 0:00:39.657 ********
2026-01-30 05:46:17.696957 | orchestrator |
2026-01-30 05:46:17.696969 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-01-30 05:46:17.696975 | orchestrator | Friday 30 January 2026 05:43:54 +0000 (0:00:00.071) 0:00:39.728 ********
2026-01-30 05:46:17.696981 | orchestrator | changed: [testbed-node-3]
2026-01-30 05:46:17.696988 | orchestrator | changed: [testbed-node-5]
2026-01-30 05:46:17.696994 | orchestrator | changed: [testbed-node-4]
2026-01-30 05:46:17.697004 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:46:17.697016 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:46:17.697027 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:46:17.697038 | orchestrator |
2026-01-30 05:46:17.697048 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-01-30 05:46:17.697058 | orchestrator |
2026-01-30 05:46:17.697069 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-30 05:46:17.697079 | orchestrator | Friday 30 January 2026 05:46:05 +0000 (0:02:11.245) 0:02:50.974 ********
2026-01-30 05:46:17.697089 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:46:17.697100 | orchestrator |
2026-01-30 05:46:17.697110 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-30 05:46:17.697121 | orchestrator | Friday 30 January 2026 05:46:06 +0000 (0:00:01.146) 0:02:52.120 ********
2026-01-30 05:46:17.697132 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-30 05:46:17.697143 | orchestrator |
2026-01-30 05:46:17.697155 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-01-30 05:46:17.697166 | orchestrator | Friday 30 January 2026 05:46:08 +0000 (0:00:01.124) 0:02:53.245 ********
2026-01-30 05:46:17.697178 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697190 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697199 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697206 | orchestrator |
2026-01-30 05:46:17.697213 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-01-30 05:46:17.697234 | orchestrator | Friday 30 January 2026 05:46:08 +0000 (0:00:00.839) 0:02:54.085 ********
2026-01-30 05:46:17.697241 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697248 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697255 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697262 | orchestrator |
2026-01-30 05:46:17.697269 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-01-30 05:46:17.697276 | orchestrator | Friday 30 January 2026 05:46:09 +0000 (0:00:00.364) 0:02:54.449 ********
2026-01-30 05:46:17.697283 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697292 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697303 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697313 | orchestrator |
2026-01-30 05:46:17.697324 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-01-30 05:46:17.697334 | orchestrator | Friday 30 January 2026 05:46:09 +0000 (0:00:00.331) 0:02:54.780 ********
2026-01-30 05:46:17.697344 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697354 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697363 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697373 | orchestrator |
2026-01-30 05:46:17.697382 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-01-30 05:46:17.697400 | orchestrator | Friday 30 January 2026 05:46:10 +0000 (0:00:00.625) 0:02:55.406 ********
2026-01-30 05:46:17.697411 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697421 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697432 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697442 | orchestrator |
2026-01-30 05:46:17.697453 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-01-30 05:46:17.697464 | orchestrator | Friday 30 January 2026 05:46:10 +0000 (0:00:00.349) 0:02:55.776 ******** 2026-01-30
05:46:17.697473 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:46:17.697483 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:46:17.697492 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:46:17.697502 | orchestrator |
2026-01-30 05:46:17.697513 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-01-30 05:46:17.697524 | orchestrator | Friday 30 January 2026 05:46:10 +0000 (0:00:00.349) 0:02:56.125 ********
2026-01-30 05:46:17.697535 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697546 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697556 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697566 | orchestrator |
2026-01-30 05:46:17.697576 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-01-30 05:46:17.697586 | orchestrator | Friday 30 January 2026 05:46:11 +0000 (0:00:00.750) 0:02:56.876 ********
2026-01-30 05:46:17.697597 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697608 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697618 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697629 | orchestrator |
2026-01-30 05:46:17.697637 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-01-30 05:46:17.697643 | orchestrator | Friday 30 January 2026 05:46:12 +0000 (0:00:00.640) 0:02:57.517 ********
2026-01-30 05:46:17.697650 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697656 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697662 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697668 | orchestrator |
2026-01-30 05:46:17.697674 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-01-30 05:46:17.697680 | orchestrator | Friday 30 January 2026 05:46:13 +0000 (0:00:00.872) 0:02:58.389 ********
2026-01-30 05:46:17.697686 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697692 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697698 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697704 | orchestrator |
2026-01-30 05:46:17.697730 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-01-30 05:46:17.697736 | orchestrator | Friday 30 January 2026 05:46:13 +0000 (0:00:00.381) 0:02:58.771 ********
2026-01-30 05:46:17.697742 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:46:17.697748 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:46:17.697754 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:46:17.697760 | orchestrator |
2026-01-30 05:46:17.697766 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-01-30 05:46:17.697772 | orchestrator | Friday 30 January 2026 05:46:14 +0000 (0:00:00.601) 0:02:59.373 ********
2026-01-30 05:46:17.697779 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:46:17.697791 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:46:17.697798 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:46:17.697804 | orchestrator |
2026-01-30 05:46:17.697810 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-01-30 05:46:17.697817 | orchestrator | Friday 30 January 2026 05:46:14 +0000 (0:00:00.854) 0:02:59.735 ********
2026-01-30 05:46:17.697823 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697829 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697835 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697841 | orchestrator |
2026-01-30 05:46:17.697847 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-01-30 05:46:17.697853 | orchestrator | Friday 30 January 2026 05:46:15 +0000 (0:00:00.854) 0:03:00.590 ********
2026-01-30 05:46:17.697866 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697872 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697878 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697884 | orchestrator |
2026-01-30 05:46:17.697890 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-01-30 05:46:17.697897 | orchestrator | Friday 30 January 2026 05:46:15 +0000 (0:00:00.444) 0:03:01.034 ********
2026-01-30 05:46:17.697903 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697909 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697915 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697921 | orchestrator |
2026-01-30 05:46:17.697927 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-01-30 05:46:17.697933 | orchestrator | Friday 30 January 2026 05:46:16 +0000 (0:00:01.111) 0:03:02.145 ********
2026-01-30 05:46:17.697939 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:46:17.697945 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:46:17.697951 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:46:17.697957 | orchestrator |
2026-01-30 05:46:17.697963 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-01-30 05:46:17.697970 | orchestrator | Friday 30 January 2026 05:46:17 +0000 (0:00:00.359) 0:03:02.505 ********
2026-01-30 05:46:17.697976 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:46:17.697982 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:46:17.697988 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:46:17.697994 | orchestrator |
2026-01-30 05:46:17.698008 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-01-30 05:46:26.790932 | orchestrator | Friday 30 January 2026 05:46:17 +0000 (0:00:00.338) 0:03:02.843 ********
2026-01-30 05:46:26.791025 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:46:26.791036 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:46:26.791043 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:46:26.791050 | orchestrator |
2026-01-30 05:46:26.791058 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-30 05:46:26.791065 | orchestrator | Friday 30 January 2026 05:46:18 +0000 (0:00:00.682) 0:03:03.526 ********
2026-01-30 05:46:26.791075 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791085 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791093 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791100 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791142 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791150 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791170 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791186 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791200 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-30 05:46:26.791220 | orchestrator |
2026-01-30 05:46:26.791231 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-30 05:46:26.791239 | orchestrator | Friday 30 January 2026 05:46:21 +0000 (0:00:03.156) 0:03:06.683 ********
2026-01-30 05:46:26.791246 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:26.791253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:26.791266 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078131 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078267 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078278 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078311 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:37.078340 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:37.078409 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:37.078425 | orchestrator | 2026-01-30 05:46:37.078433 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-01-30 05:46:37.078441 | orchestrator | Friday 30 January 2026 05:46:26 +0000 (0:00:05.263) 0:03:11.947 ******** 2026-01-30 05:46:37.078449 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-01-30 05:46:37.078455 | orchestrator | 2026-01-30 05:46:37.078460 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-01-30 05:46:37.078473 | orchestrator | Friday 30 January 2026 05:46:27 +0000 (0:00:01.200) 0:03:13.147 ******** 2026-01-30 05:46:37.078479 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:46:37.078485 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:46:37.078491 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:46:37.078496 | orchestrator | 2026-01-30 05:46:37.078501 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-01-30 05:46:37.078507 | orchestrator | Friday 30 January 2026 05:46:28 +0000 (0:00:00.759) 0:03:13.906 ******** 2026-01-30 05:46:37.078512 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:46:37.078518 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:46:37.078523 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:46:37.078528 | orchestrator | 2026-01-30 05:46:37.078534 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-01-30 05:46:37.078540 | orchestrator | Friday 30 January 2026 05:46:30 +0000 (0:00:01.751) 0:03:15.658 
******** 2026-01-30 05:46:37.078546 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:46:37.078551 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:46:37.078556 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:46:37.078562 | orchestrator | 2026-01-30 05:46:37.078567 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-01-30 05:46:37.078573 | orchestrator | Friday 30 January 2026 05:46:32 +0000 (0:00:01.882) 0:03:17.540 ******** 2026-01-30 05:46:37.078582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:37.078613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:39.916237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:39.916328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:39.916340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:39.916371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:46:39.916382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916403 | orchestrator | 2026-01-30 05:46:39.916411 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-30 05:46:39.916418 | orchestrator | Friday 30 January 2026 05:46:37 +0000 (0:00:04.687) 0:03:22.228 ******** 2026-01-30 05:46:39.916425 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:46:39.916432 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:46:39.916437 | orchestrator | } 2026-01-30 05:46:39.916443 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:46:39.916448 | orchestrator |  "msg": "Notifying handlers" 
2026-01-30 05:46:39.916454 | orchestrator | } 2026-01-30 05:46:39.916459 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:46:39.916465 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:46:39.916470 | orchestrator | } 2026-01-30 05:46:39.916475 | orchestrator | 2026-01-30 05:46:39.916494 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-01-30 05:46:39.916500 | orchestrator | Friday 30 January 2026 05:46:37 +0000 (0:00:00.418) 0:03:22.646 ******** 2026-01-30 05:46:39.916506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-30 05:46:39.916551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:46:39.916561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:47:57.154523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-30 05:47:57.154640 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 
'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-30 05:47:57.154658 | orchestrator | 2026-01-30 05:47:57.154671 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-01-30 05:47:57.154684 | orchestrator | Friday 30 January 2026 05:46:39 +0000 (0:00:02.419) 0:03:25.066 ******** 2026-01-30 05:47:57.154695 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-01-30 05:47:57.154722 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-01-30 05:47:57.154734 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-01-30 05:47:57.154744 | orchestrator | 2026-01-30 05:47:57.154756 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-01-30 05:47:57.154768 | orchestrator | Friday 30 January 2026 05:46:41 +0000 (0:00:01.343) 0:03:26.409 ******** 2026-01-30 05:47:57.154779 | orchestrator | changed: [testbed-node-0] => { 2026-01-30 05:47:57.154790 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:47:57.154801 | orchestrator | } 2026-01-30 05:47:57.154882 | orchestrator | changed: [testbed-node-1] => { 2026-01-30 05:47:57.154895 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:47:57.154906 | orchestrator | } 2026-01-30 05:47:57.154917 | orchestrator | changed: [testbed-node-2] => { 2026-01-30 05:47:57.154927 | orchestrator |  "msg": "Notifying handlers" 2026-01-30 05:47:57.154938 | orchestrator | } 2026-01-30 05:47:57.154949 | orchestrator | 2026-01-30 05:47:57.155009 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-30 05:47:57.155044 | 
orchestrator | Friday 30 January 2026 05:46:41 +0000 (0:00:00.571) 0:03:26.981 ******** 2026-01-30 05:47:57.155063 | orchestrator | 2026-01-30 05:47:57.155081 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-30 05:47:57.155099 | orchestrator | Friday 30 January 2026 05:46:41 +0000 (0:00:00.075) 0:03:27.056 ******** 2026-01-30 05:47:57.155116 | orchestrator | 2026-01-30 05:47:57.155135 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-01-30 05:47:57.155155 | orchestrator | Friday 30 January 2026 05:46:41 +0000 (0:00:00.072) 0:03:27.129 ******** 2026-01-30 05:47:57.155174 | orchestrator | 2026-01-30 05:47:57.155193 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-01-30 05:47:57.155212 | orchestrator | Friday 30 January 2026 05:46:42 +0000 (0:00:00.270) 0:03:27.399 ******** 2026-01-30 05:47:57.155232 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:47:57.155251 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:47:57.155270 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:47:57.155288 | orchestrator | 2026-01-30 05:47:57.155307 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-01-30 05:47:57.155324 | orchestrator | Friday 30 January 2026 05:46:58 +0000 (0:00:15.868) 0:03:43.268 ******** 2026-01-30 05:47:57.155341 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:47:57.155359 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:47:57.155378 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:47:57.155397 | orchestrator | 2026-01-30 05:47:57.155416 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-01-30 05:47:57.155435 | orchestrator | Friday 30 January 2026 05:47:13 +0000 (0:00:15.316) 0:03:58.585 ******** 2026-01-30 05:47:57.155451 | orchestrator | changed: 
[testbed-node-0] => (item=1) 2026-01-30 05:47:57.155463 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-01-30 05:47:57.155474 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-01-30 05:47:57.155484 | orchestrator | 2026-01-30 05:47:57.155495 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-01-30 05:47:57.155506 | orchestrator | Friday 30 January 2026 05:47:28 +0000 (0:00:14.904) 0:04:13.489 ******** 2026-01-30 05:47:57.155517 | orchestrator | changed: [testbed-node-2] 2026-01-30 05:47:57.155527 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:47:57.155538 | orchestrator | changed: [testbed-node-1] 2026-01-30 05:47:57.155548 | orchestrator | 2026-01-30 05:47:57.155559 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-01-30 05:47:57.155570 | orchestrator | Friday 30 January 2026 05:47:44 +0000 (0:00:16.251) 0:04:29.740 ******** 2026-01-30 05:47:57.155581 | orchestrator | Pausing for 5 seconds 2026-01-30 05:47:57.155592 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:47:57.155603 | orchestrator | 2026-01-30 05:47:57.155613 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-01-30 05:47:57.155624 | orchestrator | Friday 30 January 2026 05:47:49 +0000 (0:00:05.176) 0:04:34.916 ******** 2026-01-30 05:47:57.155635 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:47:57.155646 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:47:57.155656 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:47:57.155668 | orchestrator | 2026-01-30 05:47:57.155679 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-01-30 05:47:57.155710 | orchestrator | Friday 30 January 2026 05:47:50 +0000 (0:00:00.876) 0:04:35.793 ******** 2026-01-30 05:47:57.155722 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:47:57.155733 | orchestrator | skipping: 
[testbed-node-2] 2026-01-30 05:47:57.155743 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:47:57.155754 | orchestrator | 2026-01-30 05:47:57.155765 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-01-30 05:47:57.155775 | orchestrator | Friday 30 January 2026 05:47:51 +0000 (0:00:00.711) 0:04:36.504 ******** 2026-01-30 05:47:57.155786 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:47:57.155808 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:47:57.155853 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:47:57.155865 | orchestrator | 2026-01-30 05:47:57.155876 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-01-30 05:47:57.155887 | orchestrator | Friday 30 January 2026 05:47:52 +0000 (0:00:00.821) 0:04:37.325 ******** 2026-01-30 05:47:57.155897 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:47:57.155908 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:47:57.155919 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:47:57.155929 | orchestrator | 2026-01-30 05:47:57.155940 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-01-30 05:47:57.155951 | orchestrator | Friday 30 January 2026 05:47:52 +0000 (0:00:00.708) 0:04:38.034 ******** 2026-01-30 05:47:57.155962 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:47:57.155972 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:47:57.155983 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:47:57.155994 | orchestrator | 2026-01-30 05:47:57.156004 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-01-30 05:47:57.156015 | orchestrator | Friday 30 January 2026 05:47:53 +0000 (0:00:00.797) 0:04:38.831 ******** 2026-01-30 05:47:57.156026 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:47:57.156037 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:47:57.156047 | 
orchestrator | ok: [testbed-node-2] 2026-01-30 05:47:57.156058 | orchestrator | 2026-01-30 05:47:57.156069 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-01-30 05:47:57.156090 | orchestrator | Friday 30 January 2026 05:47:54 +0000 (0:00:00.800) 0:04:39.632 ******** 2026-01-30 05:47:57.156101 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-01-30 05:47:57.156112 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-01-30 05:47:57.156123 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-01-30 05:47:57.156133 | orchestrator | 2026-01-30 05:47:57.156144 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-30 05:47:57.156156 | orchestrator | testbed-node-0 : ok=50  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-30 05:47:57.156169 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-30 05:47:57.156180 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-30 05:47:57.156190 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:47:57.156201 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:47:57.156212 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-30 05:47:57.156222 | orchestrator | 2026-01-30 05:47:57.156233 | orchestrator | 2026-01-30 05:47:57.156272 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-30 05:47:57.156295 | orchestrator | Friday 30 January 2026 05:47:57 +0000 (0:00:02.651) 0:04:42.284 ******** 2026-01-30 05:47:57.156317 | orchestrator | =============================================================================== 2026-01-30 
05:47:57.156328 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.25s 2026-01-30 05:47:57.156339 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.05s 2026-01-30 05:47:57.156350 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.25s 2026-01-30 05:47:57.156361 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.87s 2026-01-30 05:47:57.156372 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.32s 2026-01-30 05:47:57.156391 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 14.90s 2026-01-30 05:47:57.156402 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.26s 2026-01-30 05:47:57.156413 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 5.18s 2026-01-30 05:47:57.156424 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 4.69s 2026-01-30 05:47:57.156434 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.16s 2026-01-30 05:47:57.156445 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.77s 2026-01-30 05:47:57.156460 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.65s 2026-01-30 05:47:57.156480 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.42s 2026-01-30 05:47:57.156498 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.91s 2026-01-30 05:47:57.156517 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.88s 2026-01-30 05:47:57.156536 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.75s 2026-01-30 05:47:57.156568 
| orchestrator | service-check-containers : Include tasks -------------------------------- 1.50s 2026-01-30 05:47:57.535621 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.49s 2026-01-30 05:47:57.535737 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.47s 2026-01-30 05:47:57.535754 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.45s 2026-01-30 05:47:57.839046 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-01-30 05:47:57.839168 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-30 05:47:57.839194 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-01-30 05:47:57.848138 | orchestrator | + set -e 2026-01-30 05:47:57.848226 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-30 05:47:57.848240 | orchestrator | ++ export INTERACTIVE=false 2026-01-30 05:47:57.848252 | orchestrator | ++ INTERACTIVE=false 2026-01-30 05:47:57.848263 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-30 05:47:57.848274 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-30 05:47:57.848285 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-01-30 05:47:59.999468 | orchestrator | 2026-01-30 05:47:59 | INFO  | Task fdefdd18-f36d-4109-9bdf-7f405039fa20 (ceph-rolling_update) was prepared for execution. 2026-01-30 05:47:59.999572 | orchestrator | 2026-01-30 05:47:59 | INFO  | It takes a moment until task fdefdd18-f36d-4109-9bdf-7f405039fa20 (ceph-rolling_update) has been started and output is visible here. 
2026-01-30 05:49:24.150357 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-30 05:49:24.150496 | orchestrator | 2.16.14 2026-01-30 05:49:24.150527 | orchestrator | 2026-01-30 05:49:24.150545 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-01-30 05:49:24.150558 | orchestrator | 2026-01-30 05:49:24.150570 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-01-30 05:49:24.150598 | orchestrator | Friday 30 January 2026 05:48:08 +0000 (0:00:01.866) 0:00:01.866 ******** 2026-01-30 05:49:24.150610 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-01-30 05:49:24.150622 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-01-30 05:49:24.150633 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-01-30 05:49:24.150645 | orchestrator | skipping: [localhost] 2026-01-30 05:49:24.150656 | orchestrator | 2026-01-30 05:49:24.150667 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-01-30 05:49:24.150678 | orchestrator | 2026-01-30 05:49:24.150689 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-01-30 05:49:24.150700 | orchestrator | Friday 30 January 2026 05:48:10 +0000 (0:00:02.026) 0:00:03.893 ******** 2026-01-30 05:49:24.150711 | orchestrator | ok: [testbed-node-0] => { 2026-01-30 05:49:24.150746 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-01-30 05:49:24.150758 | orchestrator | } 2026-01-30 05:49:24.150770 | orchestrator | ok: [testbed-node-1] => { 2026-01-30 05:49:24.150781 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-01-30 05:49:24.150792 | orchestrator | } 2026-01-30 05:49:24.150802 | orchestrator | ok: [testbed-node-2] => 
{ 2026-01-30 05:49:24.150813 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-01-30 05:49:24.150824 | orchestrator | } 2026-01-30 05:49:24.150928 | orchestrator | ok: [testbed-node-3] => { 2026-01-30 05:49:24.150944 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-01-30 05:49:24.150957 | orchestrator | } 2026-01-30 05:49:24.150969 | orchestrator | ok: [testbed-node-4] => { 2026-01-30 05:49:24.150982 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-01-30 05:49:24.150994 | orchestrator | } 2026-01-30 05:49:24.151006 | orchestrator | ok: [testbed-node-5] => { 2026-01-30 05:49:24.151019 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-01-30 05:49:24.151032 | orchestrator | } 2026-01-30 05:49:24.151044 | orchestrator | ok: [testbed-manager] => { 2026-01-30 05:49:24.151057 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-01-30 05:49:24.151070 | orchestrator | } 2026-01-30 05:49:24.151082 | orchestrator | 2026-01-30 05:49:24.151094 | orchestrator | TASK [Gather facts] ************************************************************ 2026-01-30 05:49:24.151106 | orchestrator | Friday 30 January 2026 05:48:14 +0000 (0:00:04.464) 0:00:08.357 ******** 2026-01-30 05:49:24.151119 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:24.151131 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:24.151143 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:24.151156 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:24.151168 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:24.151180 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:24.151193 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.151205 | orchestrator | 2026-01-30 05:49:24.151217 | orchestrator | TASK [Gather and delegate facts] 
*********************************************** 2026-01-30 05:49:24.151230 | orchestrator | Friday 30 January 2026 05:48:20 +0000 (0:00:05.780) 0:00:14.138 ******** 2026-01-30 05:49:24.151241 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 05:49:24.151252 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:49:24.151263 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:49:24.151273 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 05:49:24.151284 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 05:49:24.151295 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:49:24.151306 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 05:49:24.151317 | orchestrator | 2026-01-30 05:49:24.151328 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-01-30 05:49:24.151339 | orchestrator | Friday 30 January 2026 05:48:52 +0000 (0:00:32.420) 0:00:46.558 ******** 2026-01-30 05:49:24.151350 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.151361 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.151371 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.151382 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.151393 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.151404 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.151415 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.151426 | orchestrator | 2026-01-30 05:49:24.151437 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 05:49:24.151448 | orchestrator | Friday 30 January 2026 05:48:54 +0000 
(0:00:02.029) 0:00:48.588 ******** 2026-01-30 05:49:24.151469 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-01-30 05:49:24.151482 | orchestrator | 2026-01-30 05:49:24.151493 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 05:49:24.151504 | orchestrator | Friday 30 January 2026 05:48:57 +0000 (0:00:02.736) 0:00:51.325 ******** 2026-01-30 05:49:24.151515 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.151526 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.151544 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.151564 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.151583 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.151602 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.151621 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.151641 | orchestrator | 2026-01-30 05:49:24.151711 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 05:49:24.151734 | orchestrator | Friday 30 January 2026 05:49:00 +0000 (0:00:02.667) 0:00:53.993 ******** 2026-01-30 05:49:24.151754 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.151775 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.151794 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.151815 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.151864 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.151883 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.151914 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.151934 | orchestrator | 2026-01-30 05:49:24.151950 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 05:49:24.151968 | orchestrator | Friday 30 January 2026 05:49:02 +0000 (0:00:01.903) 
0:00:55.896 ******** 2026-01-30 05:49:24.151984 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.152001 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.152016 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.152031 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.152048 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.152064 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.152081 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.152097 | orchestrator | 2026-01-30 05:49:24.152114 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 05:49:24.152132 | orchestrator | Friday 30 January 2026 05:49:05 +0000 (0:00:02.820) 0:00:58.717 ******** 2026-01-30 05:49:24.152149 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.152167 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.152185 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.152203 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.152222 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.152241 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.152258 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.152270 | orchestrator | 2026-01-30 05:49:24.152281 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 05:49:24.152292 | orchestrator | Friday 30 January 2026 05:49:07 +0000 (0:00:02.095) 0:01:00.812 ******** 2026-01-30 05:49:24.152302 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.152313 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.152324 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.152334 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.152345 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.152355 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.152366 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.152377 | 
orchestrator | 2026-01-30 05:49:24.152387 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 05:49:24.152398 | orchestrator | Friday 30 January 2026 05:49:09 +0000 (0:00:02.244) 0:01:03.056 ******** 2026-01-30 05:49:24.152409 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.152419 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.152430 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.152453 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.152464 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.152475 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.152485 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.152496 | orchestrator | 2026-01-30 05:49:24.152507 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 05:49:24.152517 | orchestrator | Friday 30 January 2026 05:49:11 +0000 (0:00:01.853) 0:01:04.910 ******** 2026-01-30 05:49:24.152528 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:24.152544 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:24.152562 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:24.152582 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:24.152602 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:24.152621 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:24.152636 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:24.152647 | orchestrator | 2026-01-30 05:49:24.152658 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 05:49:24.152668 | orchestrator | Friday 30 January 2026 05:49:13 +0000 (0:00:01.970) 0:01:06.881 ******** 2026-01-30 05:49:24.152679 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.152690 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.152700 | orchestrator | ok: [testbed-node-2] 2026-01-30 
05:49:24.152711 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.152721 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.152732 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.152743 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.152753 | orchestrator | 2026-01-30 05:49:24.152764 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 05:49:24.152775 | orchestrator | Friday 30 January 2026 05:49:15 +0000 (0:00:01.914) 0:01:08.796 ******** 2026-01-30 05:49:24.152786 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:49:24.152797 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:49:24.152807 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:49:24.152818 | orchestrator | 2026-01-30 05:49:24.152829 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 05:49:24.152868 | orchestrator | Friday 30 January 2026 05:49:16 +0000 (0:00:01.640) 0:01:10.436 ******** 2026-01-30 05:49:24.152878 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:24.152889 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:24.152900 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:24.152910 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:24.152921 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:24.152931 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:24.152942 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:24.152952 | orchestrator | 2026-01-30 05:49:24.152963 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 05:49:24.152974 | orchestrator | Friday 30 January 2026 05:49:19 +0000 (0:00:02.414) 0:01:12.851 ******** 2026-01-30 05:49:24.152984 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2026-01-30 05:49:24.152995 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:49:24.153005 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:49:24.153016 | orchestrator | 2026-01-30 05:49:24.153027 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 05:49:24.153037 | orchestrator | Friday 30 January 2026 05:49:22 +0000 (0:00:03.502) 0:01:16.354 ******** 2026-01-30 05:49:24.153064 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 05:49:46.871960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 05:49:46.872096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 05:49:46.872117 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.872131 | orchestrator | 2026-01-30 05:49:46.872175 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 05:49:46.872207 | orchestrator | Friday 30 January 2026 05:49:24 +0000 (0:00:01.394) 0:01:17.749 ******** 2026-01-30 05:49:46.872222 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 05:49:46.872239 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 05:49:46.872254 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'})  2026-01-30 05:49:46.872267 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.872281 | orchestrator | 2026-01-30 05:49:46.872294 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 05:49:46.872308 | orchestrator | Friday 30 January 2026 05:49:26 +0000 (0:00:01.931) 0:01:19.681 ******** 2026-01-30 05:49:46.872325 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:46.872342 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:46.872356 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:46.872370 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.872384 | orchestrator | 2026-01-30 05:49:46.872398 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 
2026-01-30 05:49:46.872412 | orchestrator | Friday 30 January 2026 05:49:27 +0000 (0:00:01.179) 0:01:20.860 ******** 2026-01-30 05:49:46.872430 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9b4b4ef35663', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 05:49:19.937593', 'end': '2026-01-30 05:49:20.056474', 'delta': '0:00:00.118881', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9b4b4ef35663'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 05:49:46.872473 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b97e426bfe4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 05:49:20.975661', 'end': '2026-01-30 05:49:21.032033', 'delta': '0:00:00.056372', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b97e426bfe4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 05:49:46.872499 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1f4acb9ff46e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 05:49:21.550846', 'end': '2026-01-30 05:49:21.601579', 
'delta': '0:00:00.050733', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f4acb9ff46e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 05:49:46.872514 | orchestrator | 2026-01-30 05:49:46.872529 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 05:49:46.872544 | orchestrator | Friday 30 January 2026 05:49:28 +0000 (0:00:01.180) 0:01:22.041 ******** 2026-01-30 05:49:46.872577 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:46.872603 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:46.872616 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:46.872630 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:46.872643 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:46.872657 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:46.872671 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:46.872686 | orchestrator | 2026-01-30 05:49:46.872700 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 05:49:46.872714 | orchestrator | Friday 30 January 2026 05:49:30 +0000 (0:00:02.234) 0:01:24.275 ******** 2026-01-30 05:49:46.872727 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.872741 | orchestrator | 2026-01-30 05:49:46.872754 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 05:49:46.872768 | orchestrator | Friday 30 January 2026 05:49:31 +0000 (0:00:01.253) 0:01:25.529 ******** 2026-01-30 05:49:46.872780 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:46.872792 | orchestrator | ok: 
[testbed-node-1] 2026-01-30 05:49:46.872806 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:46.872849 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:46.872863 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:46.872877 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:46.872891 | orchestrator | ok: [testbed-manager] 2026-01-30 05:49:46.872904 | orchestrator | 2026-01-30 05:49:46.872917 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 05:49:46.872931 | orchestrator | Friday 30 January 2026 05:49:34 +0000 (0:00:02.237) 0:01:27.766 ******** 2026-01-30 05:49:46.872939 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:46.872986 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-01-30 05:49:46.872995 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-30 05:49:46.873003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-30 05:49:46.873011 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-30 05:49:46.873018 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-30 05:49:46.873027 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-30 05:49:46.873035 | orchestrator | 2026-01-30 05:49:46.873043 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 05:49:46.873051 | orchestrator | Friday 30 January 2026 05:49:37 +0000 (0:00:03.561) 0:01:31.328 ******** 2026-01-30 05:49:46.873067 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:49:46.873075 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:49:46.873083 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:49:46.873091 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:49:46.873098 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:49:46.873106 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:49:46.873114 | 
orchestrator | ok: [testbed-manager] 2026-01-30 05:49:46.873122 | orchestrator | 2026-01-30 05:49:46.873129 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 05:49:46.873137 | orchestrator | Friday 30 January 2026 05:49:39 +0000 (0:00:02.275) 0:01:33.603 ******** 2026-01-30 05:49:46.873145 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.873153 | orchestrator | 2026-01-30 05:49:46.873160 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 05:49:46.873168 | orchestrator | Friday 30 January 2026 05:49:41 +0000 (0:00:01.148) 0:01:34.752 ******** 2026-01-30 05:49:46.873176 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.873184 | orchestrator | 2026-01-30 05:49:46.873191 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 05:49:46.873199 | orchestrator | Friday 30 January 2026 05:49:42 +0000 (0:00:01.207) 0:01:35.960 ******** 2026-01-30 05:49:46.873207 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.873214 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:46.873222 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:46.873230 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:46.873237 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:46.873245 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:46.873253 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:46.873261 | orchestrator | 2026-01-30 05:49:46.873268 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 05:49:46.873276 | orchestrator | Friday 30 January 2026 05:49:44 +0000 (0:00:02.533) 0:01:38.493 ******** 2026-01-30 05:49:46.873284 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:46.873291 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:46.873299 | 
orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:46.873307 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:46.873315 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:46.873322 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:46.873340 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:57.358157 | orchestrator | 2026-01-30 05:49:57.358264 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 05:49:57.358276 | orchestrator | Friday 30 January 2026 05:49:46 +0000 (0:00:01.975) 0:01:40.469 ******** 2026-01-30 05:49:57.358284 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:57.358292 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:57.358313 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:57.358324 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:57.358336 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:57.358348 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:57.358367 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:57.358382 | orchestrator | 2026-01-30 05:49:57.358398 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-30 05:49:57.358409 | orchestrator | Friday 30 January 2026 05:49:48 +0000 (0:00:02.135) 0:01:42.605 ******** 2026-01-30 05:49:57.358421 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:57.358433 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:57.358444 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:57.358455 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:57.358467 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:57.358478 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:57.358489 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:57.358502 | orchestrator | 2026-01-30 05:49:57.358514 | orchestrator | TASK [ceph-facts 
: Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-30 05:49:57.358527 | orchestrator | Friday 30 January 2026 05:49:50 +0000 (0:00:01.952) 0:01:44.557 ******** 2026-01-30 05:49:57.358561 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:57.358574 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:57.358585 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:57.358596 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:57.358608 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:57.358620 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:57.358633 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:57.358645 | orchestrator | 2026-01-30 05:49:57.358658 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 05:49:57.358671 | orchestrator | Friday 30 January 2026 05:49:53 +0000 (0:00:02.070) 0:01:46.628 ******** 2026-01-30 05:49:57.358683 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:57.358695 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:57.358706 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:57.358716 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:57.358727 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:57.358738 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:57.358750 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:57.358761 | orchestrator | 2026-01-30 05:49:57.358773 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 05:49:57.358787 | orchestrator | Friday 30 January 2026 05:49:54 +0000 (0:00:01.930) 0:01:48.558 ******** 2026-01-30 05:49:57.358819 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:57.358833 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:57.358846 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:57.358859 | 
orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:57.358872 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:57.358882 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:57.358894 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:57.358905 | orchestrator | 2026-01-30 05:49:57.358917 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 05:49:57.358929 | orchestrator | Friday 30 January 2026 05:49:57 +0000 (0:00:02.246) 0:01:50.805 ******** 2026-01-30 05:49:57.358944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.358960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.358972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.359009 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:49:57.359057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.359070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.359081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.359096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 
'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 05:49:57.359109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.359143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:49:57.619401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '668a7bb6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 05:49:57.619530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619551 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:57.619563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-01-30 05:49:57.619573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.619601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:49:57.619624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-01-30 05:49:57.958086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.958156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.958163 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:49:57.958173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b944efd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 
'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 05:49:57.958196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.958200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.958223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.958229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}})  2026-01-30 05:49:57.958235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 05:49:57.958241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}})  2026-01-30 05:49:57.958246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.958253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.958258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:49:57.958263 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:49:57.958273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.965331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 05:49:57.965424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.965442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}})  2026-01-30 05:49:57.965456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}})  2026-01-30 05:49:57.965498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.965612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 
'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 05:49:57.965633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.965646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.965659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:57.965682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 
'holders': []}})  2026-01-30 05:49:57.965695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}})  2026-01-30 05:49:57.965724 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 05:49:58.227221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}})  2026-01-30 05:49:58.227312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.227323 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.227331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:49:58.227358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.227365 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:49:58.227373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 05:49:58.227392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 
05:49:58.227414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}})  2026-01-30 05:49:58.227422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}})  2026-01-30 05:49:58.227428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.227441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 05:49:58.227457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.345926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.345996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 05:49:58.346005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.346011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}})  2026-01-30 05:49:58.346068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 05:49:58.346075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}})  2026-01-30 05:49:58.346091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.346112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.346119 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:49:58.346126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.346138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 05:49:58.346145 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:58.346152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}})  2026-01-30 05:49:58.346162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}})  2026-01-30 05:49:58.346175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 05:49:59.633623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633660 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:49:59.633668 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:49:59.633674 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633692 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633699 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633710 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': 
['2026-01-30-02-37-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:49:59.633716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633722 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633729 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.633749 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd146c94a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 05:49:59.767886 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.768000 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:49:59.768025 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:49:59.768039 | orchestrator | 2026-01-30 05:49:59.768051 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 05:49:59.768068 | orchestrator | Friday 30 January 2026 05:49:59 +0000 (0:00:02.424) 0:01:53.229 ******** 2026-01-30 05:49:59.768089 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768112 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768153 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768175 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768234 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768247 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768258 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768279 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.768323 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.924874 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.924971 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:49:59.924987 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.924998 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925025 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925037 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925068 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925096 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925107 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925126 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '668a7bb6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 
'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925146 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:49:59.925163 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197009 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:50:00.197101 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197113 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197119 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197142 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197169 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197177 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197198 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197213 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b944efd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197226 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197232 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.197238 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:50:00.197251 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347172 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347279 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347367 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347379 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.347394 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430332 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430428 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 
'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430466 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.430479 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545375 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545529 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545549 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545560 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545571 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545611 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': 
{'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.545668 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': 
'0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747686 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747708 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747730 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747929 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747965 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.747985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.748007 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.748029 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.748048 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.748084 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:50:00.748118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775206 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:50:00.775338 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775392 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775412 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775478 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775536 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775556 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775605 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775626 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:00.775686 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 
'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:13.932306 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:13.932452 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd146c94a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part16', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part14', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part15', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part1', 'scsi-SQEMU_QEMU_HARDDISK_d146c94a-adac-4c27-b0d5-e5e0f56c9da7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:13.932532 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:13.932571 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:50:13.932587 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})
2026-01-30 05:50:13.932601 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:50:13.932624 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:50:13.932642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:50:13.932656 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:13.932670 | orchestrator |
2026-01-30 05:50:13.932684 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-30 05:50:13.932700 | orchestrator | Friday 30 January 2026 05:50:01 +0000 (0:00:02.332) 0:01:55.562 ********
2026-01-30 05:50:13.932714 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:50:13.932728 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:50:13.932736 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:50:13.932744 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:50:13.932751 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:50:13.932758 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:50:13.932765 | orchestrator | ok: [testbed-manager]
2026-01-30 05:50:13.932772 | orchestrator |
2026-01-30 05:50:13.932779 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-30 05:50:13.932839 | orchestrator | Friday 30 January 2026 05:50:04 +0000 (0:00:02.505) 0:01:58.067 ********
2026-01-30 05:50:13.932849 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:50:13.932858 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:50:13.932873 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:50:13.932882 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:50:13.932890 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:50:13.932898 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:50:13.932906 | orchestrator | ok: [testbed-manager]
2026-01-30 05:50:13.932914 | orchestrator |
2026-01-30 05:50:13.932923 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 05:50:13.932932 | orchestrator | Friday 30 January 2026 05:50:06 +0000 (0:00:01.871) 0:01:59.939 ********
2026-01-30 05:50:13.932991 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:50:13.933000 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:50:13.933008 | orchestrator | ok: [testbed-node-2]
2026-01-30 05:50:13.933016 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:50:13.933057 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:50:13.933066 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:50:13.933074 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:50:13.933082 | orchestrator |
2026-01-30 05:50:13.933090 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 05:50:13.933144 | orchestrator | Friday 30 January 2026 05:50:09 +0000 (0:00:02.147) 0:02:02.645 ********
2026-01-30 05:50:13.933155 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:50:13.933164 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:50:13.933172 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:50:13.933179 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:13.933187 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:50:13.933193 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:13.933201 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:50:13.933208 | orchestrator |
2026-01-30 05:50:13.933215 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 05:50:13.933231 | orchestrator | Friday 30 January 2026 05:50:11 +0000 (0:00:02.147) 0:02:04.793 ********
2026-01-30 05:50:13.933238 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:50:13.933245 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:50:13.933253 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:50:13.933260 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:13.933277 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:50:41.240121 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:41.240210 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-01-30 05:50:41.240220 | orchestrator |
2026-01-30 05:50:41.240228 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 05:50:41.240235 | orchestrator | Friday 30 January 2026 05:50:13 +0000 (0:00:02.734) 0:02:07.528 ********
2026-01-30 05:50:41.240242 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:50:41.240248 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:50:41.240254 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:50:41.240261 | orchestrator |
skipping: [testbed-node-3]
2026-01-30 05:50:41.240267 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:50:41.240273 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:41.240279 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:50:41.240285 | orchestrator |
2026-01-30 05:50:41.240292 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-30 05:50:41.240298 | orchestrator | Friday 30 January 2026 05:50:16 +0000 (0:00:02.167) 0:02:09.696 ********
2026-01-30 05:50:41.240305 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 05:50:41.240311 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-30 05:50:41.240317 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 05:50:41.240323 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:50:41.240329 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 05:50:41.240335 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 05:50:41.240341 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 05:50:41.240347 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-30 05:50:41.240353 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-30 05:50:41.240359 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 05:50:41.240365 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-30 05:50:41.240371 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 05:50:41.240377 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-30 05:50:41.240383 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-30 05:50:41.240389 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 05:50:41.240395 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-30 05:50:41.240401 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 05:50:41.240407 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-30 05:50:41.240413 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-30 05:50:41.240419 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-30 05:50:41.240425 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-30 05:50:41.240431 | orchestrator |
2026-01-30 05:50:41.240438 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-30 05:50:41.240444 | orchestrator | Friday 30 January 2026 05:50:19 +0000 (0:00:03.267) 0:02:12.963 ********
2026-01-30 05:50:41.240450 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 05:50:41.240456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 05:50:41.240462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 05:50:41.240468 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:50:41.240474 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-30 05:50:41.240501 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:50:41.240508 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-30 05:50:41.240514 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:50:41.240520 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 05:50:41.240526 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 05:50:41.240533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 05:50:41.240539 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:50:41.240556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 05:50:41.240562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 05:50:41.240568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 05:50:41.240574 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:41.240580 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-30 05:50:41.240586 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-30 05:50:41.240592 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-30 05:50:41.240598 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:50:41.240605 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-30 05:50:41.240611 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-30 05:50:41.240617 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-30 05:50:41.240623 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:41.240629 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-30 05:50:41.240635 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-01-30 05:50:41.240641 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-01-30 05:50:41.240647 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:50:41.240653 | orchestrator |
2026-01-30 05:50:41.240660 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-30 05:50:41.240667 | orchestrator | Friday 30 January 2026 05:50:21 +0000 (0:00:01.909) 0:02:14.873 ********
2026-01-30 05:50:41.240674 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:50:41.240682 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:50:41.240689 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:50:41.240696 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:50:41.240715 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30
05:50:41.240723 | orchestrator |
2026-01-30 05:50:41.240731 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 05:50:41.240739 | orchestrator | Friday 30 January 2026 05:50:22 +0000 (0:00:01.725) 0:02:16.599 ********
2026-01-30 05:50:41.240747 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:41.240754 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:50:41.240761 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:41.240805 | orchestrator |
2026-01-30 05:50:41.240812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 05:50:41.240820 | orchestrator | Friday 30 January 2026 05:50:24 +0000 (0:00:01.439) 0:02:18.039 ********
2026-01-30 05:50:41.240827 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:41.240834 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:50:41.240841 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:41.240848 | orchestrator |
2026-01-30 05:50:41.240855 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 05:50:41.240862 | orchestrator | Friday 30 January 2026 05:50:25 +0000 (0:00:01.278) 0:02:19.317 ********
2026-01-30 05:50:41.240869 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:41.240876 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:50:41.240883 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:50:41.240896 | orchestrator |
2026-01-30 05:50:41.240903 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 05:50:41.240910 | orchestrator | Friday 30 January 2026 05:50:26 +0000 (0:00:01.268) 0:02:20.585 ********
2026-01-30 05:50:41.240917 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:50:41.240925 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:50:41.240932 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:50:41.240939 | orchestrator |
2026-01-30 05:50:41.240946 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 05:50:41.240953 | orchestrator | Friday 30 January 2026 05:50:28 +0000 (0:00:01.331) 0:02:21.917 ********
2026-01-30 05:50:41.240960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 05:50:41.240967 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 05:50:41.240974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 05:50:41.240982 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:41.240989 | orchestrator |
2026-01-30 05:50:41.240996 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 05:50:41.241003 | orchestrator | Friday 30 January 2026 05:50:29 +0000 (0:00:01.619) 0:02:23.537 ********
2026-01-30 05:50:41.241010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 05:50:41.241018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 05:50:41.241026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 05:50:41.241032 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:41.241038 | orchestrator |
2026-01-30 05:50:41.241044 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 05:50:41.241051 | orchestrator | Friday 30 January 2026 05:50:31 +0000 (0:00:01.647) 0:02:25.184 ********
2026-01-30 05:50:41.241057 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 05:50:41.241063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 05:50:41.241069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 05:50:41.241075 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:50:41.241081 | orchestrator |
2026-01-30 05:50:41.241087 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 05:50:41.241094 | orchestrator | Friday 30 January 2026 05:50:33 +0000 (0:00:01.644) 0:02:26.828 ********
2026-01-30 05:50:41.241100 | orchestrator | ok: [testbed-node-3]
2026-01-30 05:50:41.241106 | orchestrator | ok: [testbed-node-4]
2026-01-30 05:50:41.241112 | orchestrator | ok: [testbed-node-5]
2026-01-30 05:50:41.241118 | orchestrator |
2026-01-30 05:50:41.241124 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 05:50:41.241131 | orchestrator | Friday 30 January 2026 05:50:34 +0000 (0:00:01.391) 0:02:28.220 ********
2026-01-30 05:50:41.241137 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-30 05:50:41.241143 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-30 05:50:41.241150 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-30 05:50:41.241156 | orchestrator |
2026-01-30 05:50:41.241162 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-30 05:50:41.241168 | orchestrator | Friday 30 January 2026 05:50:36 +0000 (0:00:01.516) 0:02:29.736 ********
2026-01-30 05:50:41.241174 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 05:50:41.241181 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 05:50:41.241188 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 05:50:41.241194 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 05:50:41.241200 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 05:50:41.241206 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 05:50:41.241212 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 05:50:41.241222 | orchestrator |
2026-01-30 05:50:41.241229 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-30 05:50:41.241235 | orchestrator | Friday 30 January 2026 05:50:38 +0000 (0:00:02.139) 0:02:31.876 ********
2026-01-30 05:50:41.241241 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 05:50:41.241247 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 05:50:41.241253 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 05:50:41.241264 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 05:51:27.971264 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 05:51:27.971360 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 05:51:27.971371 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 05:51:27.971381 | orchestrator |
2026-01-30 05:51:27.971390 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-01-30 05:51:27.971400 | orchestrator | Friday 30 January 2026 05:50:41 +0000 (0:00:02.954) 0:02:34.831 ********
2026-01-30 05:51:27.971408 | orchestrator | changed: [testbed-node-3]
2026-01-30 05:51:27.971417 | orchestrator | changed: [testbed-node-4]
2026-01-30 05:51:27.971425 | orchestrator | changed: [testbed-node-5]
2026-01-30 05:51:27.971433 | orchestrator | changed: [testbed-manager]
2026-01-30 05:51:27.971441 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:51:27.971448 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:51:27.971456 | orchestrator | changed: [testbed-node-1]
2026-01-30
05:51:27.971464 | orchestrator |
2026-01-30 05:51:27.971472 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-01-30 05:51:27.971480 | orchestrator | Friday 30 January 2026 05:50:53 +0000 (0:00:11.796) 0:02:46.628 ********
2026-01-30 05:51:27.971488 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.971496 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.971504 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.971512 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.971520 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.971527 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.971535 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.971543 | orchestrator |
2026-01-30 05:51:27.971551 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-01-30 05:51:27.971559 | orchestrator | Friday 30 January 2026 05:50:55 +0000 (0:00:02.143) 0:02:48.771 ********
2026-01-30 05:51:27.971566 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.971574 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.971582 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.971590 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.971597 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.971605 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.971613 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.971620 | orchestrator |
2026-01-30 05:51:27.971628 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-01-30 05:51:27.971636 | orchestrator | Friday 30 January 2026 05:50:57 +0000 (0:00:01.892) 0:02:50.664 ********
2026-01-30 05:51:27.971644 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.971652 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:51:27.971659 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:51:27.971667 | orchestrator | changed: [testbed-node-2]
2026-01-30 05:51:27.971675 | orchestrator | changed: [testbed-node-3]
2026-01-30 05:51:27.971683 | orchestrator | changed: [testbed-node-4]
2026-01-30 05:51:27.971690 | orchestrator | changed: [testbed-node-5]
2026-01-30 05:51:27.971698 | orchestrator |
2026-01-30 05:51:27.971706 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-01-30 05:51:27.971734 | orchestrator | Friday 30 January 2026 05:51:00 +0000 (0:00:03.089) 0:02:53.754 ********
2026-01-30 05:51:27.971811 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-01-30 05:51:27.971830 | orchestrator |
2026-01-30 05:51:27.971841 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-01-30 05:51:27.971850 | orchestrator | Friday 30 January 2026 05:51:03 +0000 (0:00:03.003) 0:02:56.758 ********
2026-01-30 05:51:27.971859 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.971868 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.971877 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.971885 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.971894 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.971908 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.971917 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.971926 | orchestrator |
2026-01-30 05:51:27.971934 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-01-30 05:51:27.971944 | orchestrator | Friday 30 January 2026 05:51:05 +0000 (0:00:01.878) 0:02:58.636 ********
2026-01-30 05:51:27.971953 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.971962 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.971971 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.971979 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.971988 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.971996 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.972005 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.972014 | orchestrator |
2026-01-30 05:51:27.972023 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-01-30 05:51:27.972032 | orchestrator | Friday 30 January 2026 05:51:07 +0000 (0:00:02.168) 0:03:00.805 ********
2026-01-30 05:51:27.972041 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.972050 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.972058 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.972067 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.972075 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.972084 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.972093 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.972102 | orchestrator |
2026-01-30 05:51:27.972111 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-01-30 05:51:27.972120 | orchestrator | Friday 30 January 2026 05:51:09 +0000 (0:00:02.025) 0:03:02.830 ********
2026-01-30 05:51:27.972128 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.972137 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.972146 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.972155 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.972164 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.972173 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.972181 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.972189 | orchestrator |
2026-01-30 05:51:27.972211 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-01-30 05:51:27.972220 | orchestrator | Friday 30 January 2026 05:51:11 +0000 (0:00:02.275) 0:03:05.105 ********
2026-01-30 05:51:27.972228 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.972235 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.972243 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.972251 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.972258 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.972266 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.972273 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.972281 | orchestrator |
2026-01-30 05:51:27.972289 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-01-30 05:51:27.972305 | orchestrator | Friday 30 January 2026 05:51:13 +0000 (0:00:02.115) 0:03:07.221 ********
2026-01-30 05:51:27.972313 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.972320 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:51:27.972328 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:51:27.972336 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:51:27.972344 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:51:27.972352 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:51:27.972359 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:51:27.972367 | orchestrator |
2026-01-30 05:51:27.972375 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-01-30 05:51:27.972383 | orchestrator | Friday 30 January 2026 05:51:15 +0000 (0:00:02.107) 0:03:09.328 ********
2026-01-30 05:51:27.972391 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:51:27.972398 | orchestrator | skipping:
[testbed-node-1] 2026-01-30 05:51:27.972406 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:27.972414 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:27.972421 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:27.972429 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:27.972436 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:27.972444 | orchestrator | 2026-01-30 05:51:27.972452 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-01-30 05:51:27.972460 | orchestrator | Friday 30 January 2026 05:51:17 +0000 (0:00:01.861) 0:03:11.190 ******** 2026-01-30 05:51:27.972468 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:27.972475 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:27.972483 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:27.972491 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:27.972498 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:27.972506 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:27.972514 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:27.972521 | orchestrator | 2026-01-30 05:51:27.972529 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-01-30 05:51:27.972537 | orchestrator | Friday 30 January 2026 05:51:19 +0000 (0:00:02.047) 0:03:13.237 ******** 2026-01-30 05:51:27.972545 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:27.972553 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:27.972560 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:27.972568 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:27.972576 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:27.972583 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:27.972591 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:27.972599 | orchestrator | 
2026-01-30 05:51:27.972606 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-01-30 05:51:27.972614 | orchestrator | Friday 30 January 2026 05:51:21 +0000 (0:00:01.823) 0:03:15.061 ******** 2026-01-30 05:51:27.972622 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:27.972630 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:27.972637 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:27.972645 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:27.972653 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:27.972660 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:27.972668 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:27.972676 | orchestrator | 2026-01-30 05:51:27.972683 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-01-30 05:51:27.972695 | orchestrator | Friday 30 January 2026 05:51:23 +0000 (0:00:01.746) 0:03:16.808 ******** 2026-01-30 05:51:27.972703 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:27.972711 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:27.972719 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:27.972726 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:27.972734 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:27.972747 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:27.972755 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:27.972763 | orchestrator | 2026-01-30 05:51:27.972770 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-01-30 05:51:27.972796 | orchestrator | Friday 30 January 2026 05:51:25 +0000 (0:00:02.040) 0:03:18.848 ******** 2026-01-30 05:51:27.972804 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:27.972812 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:27.972820 | orchestrator | 
skipping: [testbed-node-2] 2026-01-30 05:51:27.972827 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:27.972835 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:27.972843 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:27.972850 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:27.972858 | orchestrator | 2026-01-30 05:51:27.972866 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-01-30 05:51:27.972873 | orchestrator | Friday 30 January 2026 05:51:27 +0000 (0:00:01.882) 0:03:20.730 ******** 2026-01-30 05:51:27.972881 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:27.972889 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:27.972897 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:27.972905 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 05:51:27.972914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 05:51:27.972922 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:27.972935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})  2026-01-30 05:51:52.476981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})  2026-01-30 05:51:52.477065 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 05:51:52.477080 | orchestrator | skipping: 
[testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 05:51:52.477086 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477092 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477098 | orchestrator | 2026-01-30 05:51:52.477106 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-01-30 05:51:52.477113 | orchestrator | Friday 30 January 2026 05:51:29 +0000 (0:00:02.198) 0:03:22.929 ******** 2026-01-30 05:51:52.477119 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:52.477124 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:52.477130 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:52.477136 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477141 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477147 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477153 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477158 | orchestrator | 2026-01-30 05:51:52.477164 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-01-30 05:51:52.477170 | orchestrator | Friday 30 January 2026 05:51:31 +0000 (0:00:02.063) 0:03:24.993 ******** 2026-01-30 05:51:52.477176 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:52.477181 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:52.477187 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:52.477193 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477198 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477204 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477231 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477237 | orchestrator | 2026-01-30 05:51:52.477243 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on 
SUSE/openSUSE Leap] ****** 2026-01-30 05:51:52.477249 | orchestrator | Friday 30 January 2026 05:51:33 +0000 (0:00:02.093) 0:03:27.086 ******** 2026-01-30 05:51:52.477254 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:52.477260 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:52.477266 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:52.477271 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477277 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477283 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477288 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477294 | orchestrator | 2026-01-30 05:51:52.477300 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-01-30 05:51:52.477306 | orchestrator | Friday 30 January 2026 05:51:35 +0000 (0:00:02.093) 0:03:29.180 ******** 2026-01-30 05:51:52.477311 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:52.477317 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:52.477323 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:52.477328 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477334 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477340 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477346 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477351 | orchestrator | 2026-01-30 05:51:52.477357 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-01-30 05:51:52.477363 | orchestrator | Friday 30 January 2026 05:51:37 +0000 (0:00:01.903) 0:03:31.084 ******** 2026-01-30 05:51:52.477369 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:52.477374 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:52.477390 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:52.477396 | orchestrator | skipping: [testbed-node-3] 
2026-01-30 05:51:52.477402 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477407 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477413 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477419 | orchestrator | 2026-01-30 05:51:52.477424 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-01-30 05:51:52.477430 | orchestrator | Friday 30 January 2026 05:51:39 +0000 (0:00:02.136) 0:03:33.220 ******** 2026-01-30 05:51:52.477436 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:52.477442 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:52.477447 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:52.477453 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477459 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477465 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477470 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477476 | orchestrator | 2026-01-30 05:51:52.477482 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-01-30 05:51:52.477487 | orchestrator | Friday 30 January 2026 05:51:41 +0000 (0:00:01.885) 0:03:35.106 ******** 2026-01-30 05:51:52.477493 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:51:52.477499 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:51:52.477505 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:51:52.477510 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:51:52.477517 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 05:51:52.477523 | orchestrator | 2026-01-30 05:51:52.477529 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-01-30 05:51:52.477534 | orchestrator | Friday 30 January 2026 05:51:43 +0000 (0:00:02.409) 
0:03:37.516 ******** 2026-01-30 05:51:52.477541 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:51:52.477549 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:51:52.477556 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:51:52.477563 | orchestrator | 2026-01-30 05:51:52.477574 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-01-30 05:51:52.477581 | orchestrator | Friday 30 January 2026 05:51:45 +0000 (0:00:01.430) 0:03:38.947 ******** 2026-01-30 05:51:52.477599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})  2026-01-30 05:51:52.477607 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})  2026-01-30 05:51:52.477614 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})  2026-01-30 05:51:52.477627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})  2026-01-30 05:51:52.477634 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477640 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})  2026-01-30 05:51:52.477647 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})  2026-01-30 05:51:52.477654 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477661 | orchestrator | 2026-01-30 05:51:52.477667 | orchestrator | TASK 
[ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-01-30 05:51:52.477674 | orchestrator | Friday 30 January 2026 05:51:46 +0000 (0:00:01.379) 0:03:40.326 ******** 2026-01-30 05:51:52.477683 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:52.477691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:52.477698 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:52.477715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:52.477722 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is 
undefined', 'item': {'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:52.477735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:52.477746 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477753 | orchestrator | 2026-01-30 05:51:52.477760 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-01-30 05:51:52.477766 | orchestrator | Friday 30 January 2026 05:51:48 +0000 (0:00:01.742) 0:03:42.068 ******** 2026-01-30 05:51:52.477773 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477779 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477805 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477815 | orchestrator | 2026-01-30 05:51:52.477824 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-01-30 05:51:52.477834 | orchestrator | Friday 30 January 2026 05:51:49 +0000 (0:00:01.329) 0:03:43.398 ******** 2026-01-30 05:51:52.477843 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477854 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:52.477863 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:52.477873 | orchestrator | 2026-01-30 05:51:52.477884 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-01-30 05:51:52.477894 | orchestrator | Friday 30 January 2026 05:51:51 +0000 (0:00:01.345) 0:03:44.744 ******** 2026-01-30 05:51:52.477903 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:52.477914 | 
orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:57.384419 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:57.384499 | orchestrator | 2026-01-30 05:51:57.384506 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-01-30 05:51:57.384512 | orchestrator | Friday 30 January 2026 05:51:52 +0000 (0:00:01.330) 0:03:46.075 ******** 2026-01-30 05:51:57.384516 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:57.384520 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:57.384525 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:51:57.384529 | orchestrator | 2026-01-30 05:51:57.384534 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-01-30 05:51:57.384538 | orchestrator | Friday 30 January 2026 05:51:53 +0000 (0:00:01.352) 0:03:47.427 ******** 2026-01-30 05:51:57.384542 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}) 2026-01-30 05:51:57.384548 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}) 2026-01-30 05:51:57.384552 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}) 2026-01-30 05:51:57.384556 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}) 2026-01-30 05:51:57.384560 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'}) 2026-01-30 05:51:57.384564 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 
'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}) 2026-01-30 05:51:57.384568 | orchestrator | 2026-01-30 05:51:57.384572 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-01-30 05:51:57.384577 | orchestrator | Friday 30 January 2026 05:51:55 +0000 (0:00:02.121) 0:03:49.549 ******** 2026-01-30 05:51:57.384598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0/osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1769744997.512921, 'mtime': 1769744997.508921, 'ctime': 1769744997.508921, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0/osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:57.384628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b/osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': 
False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1769745018.3022714, 'mtime': 1769745018.2992713, 'ctime': 1769745018.2992713, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b/osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:57.384634 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:51:57.384639 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267/osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1769744993.7525778, 'mtime': 1769744993.748801, 'ctime': 1769744993.748801, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 
'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267/osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:57.384646 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a1704272-fd93-5be5-acd9-a48498ed5939/osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1769745014.4359286, 'mtime': 1769745014.4319286, 'ctime': 1769745014.4319286, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-a1704272-fd93-5be5-acd9-a48498ed5939/osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'}, 'ansible_loop_var': 'item'})  2026-01-30 05:51:57.384654 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:51:57.384661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': 
{'exists': True, 'path': '/dev/ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50/osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1769744997.3588548, 'mtime': 1769744997.3558547, 'ctime': 1769744997.3558547, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50/osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092207 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd/osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1769745017.9261992, 'mtime': 1769745017.9221992, 'ctime': 1769745017.9221992, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd/osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092305 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:03.092315 | orchestrator |
2026-01-30 05:52:03.092320 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-01-30 05:52:03.092326 | orchestrator | Friday 30 January 2026 05:51:57 +0000 (0:00:01.438) 0:03:50.987 ********
2026-01-30 05:52:03.092331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})
2026-01-30 05:52:03.092337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})
2026-01-30 05:52:03.092357 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:03.092362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 05:52:03.092366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 05:52:03.092370 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:03.092374 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})
2026-01-30 05:52:03.092388 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})
2026-01-30 05:52:03.092392 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:03.092396 | orchestrator |
2026-01-30 05:52:03.092401 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-01-30 05:52:03.092406 | orchestrator | Friday 30 January 2026 05:51:58 +0000 (0:00:01.377) 0:03:52.365 ********
2026-01-30 05:52:03.092411 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092420 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:03.092424 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092441 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:03.092445 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092453 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:03.092457 | orchestrator |
2026-01-30 05:52:03.092461 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-01-30 05:52:03.092465 | orchestrator | Friday 30 January 2026 05:52:00 +0000 (0:00:01.583) 0:03:53.749 ********
2026-01-30 05:52:03.092469 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'})
2026-01-30 05:52:03.092477 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'})
2026-01-30 05:52:03.092480 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:03.092484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'})
2026-01-30 05:52:03.092488 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'})
2026-01-30 05:52:03.092492 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:03.092496 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'})
2026-01-30 05:52:03.092499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'})
2026-01-30 05:52:03.092503 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:03.092507 | orchestrator |
2026-01-30 05:52:03.092511 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-01-30 05:52:03.092515 | orchestrator | Friday 30 January 2026 05:52:01 +0000 (0:00:01.583) 0:03:55.332 ********
2026-01-30 05:52:03.092521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0', 'data_vg': 'ceph-8ea9dc5c-1d02-5b7a-b23f-cb4648b979f0'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b', 'data_vg': 'ceph-a8f13564-aa0f-525b-b1f5-f4cdb3fdc88b'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092529 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:03.092533 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267', 'data_vg': 'ceph-3dd49c2b-59d1-5a3f-9cfa-a0fb165dd267'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092537 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a1704272-fd93-5be5-acd9-a48498ed5939', 'data_vg': 'ceph-a1704272-fd93-5be5-acd9-a48498ed5939'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092541 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:03.092545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-c96ee3ed-1860-5729-adba-bbe0a3b53c50', 'data_vg': 'ceph-c96ee3ed-1860-5729-adba-bbe0a3b53c50'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:03.092552 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd', 'data_vg': 'ceph-484c5dd7-ec3c-5b7c-8938-cd2a84a156dd'}, 'ansible_loop_var': 'item'})
2026-01-30 05:52:12.061318 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:12.061419 | orchestrator |
2026-01-30 05:52:12.061435 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-01-30 05:52:12.061446 | orchestrator | Friday 30 January 2026 05:52:03 +0000 (0:00:01.356) 0:03:56.688 ********
2026-01-30 05:52:12.061485 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:12.061495 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:12.061504 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:12.061513 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:12.061523 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:12.061532 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:12.061542 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:12.061551 | orchestrator |
2026-01-30 05:52:12.061560 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-01-30 05:52:12.061570 | orchestrator | Friday 30 January 2026 05:52:04 +0000 (0:00:01.921) 0:03:58.610 ********
2026-01-30 05:52:12.061579 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:12.061588 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:12.061597 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:12.061607 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:12.061613 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 05:52:12.061619 | orchestrator |
2026-01-30 05:52:12.061624 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-01-30 05:52:12.061630 | orchestrator | Friday 30 January 2026 05:52:07 +0000 (0:00:02.408) 0:04:01.018 ********
2026-01-30 05:52:12.061636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061649 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061666 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:12.061671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061717 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:12.061722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061749 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:12.061760 | orchestrator |
2026-01-30 05:52:12.061766 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-01-30 05:52:12.061771 | orchestrator | Friday 30 January 2026 05:52:08 +0000 (0:00:01.417) 0:04:02.436 ********
2026-01-30 05:52:12.061777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061864 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:12.061871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061913 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:12.061921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.061969 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:12.061979 | orchestrator |
2026-01-30 05:52:12.061988 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-01-30 05:52:12.061998 | orchestrator | Friday 30 January 2026 05:52:10 +0000 (0:00:01.583) 0:04:04.020 ********
2026-01-30 05:52:12.062007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062129 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:12.062146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062189 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:12.062195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 05:52:12.062226 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:12.062231 | orchestrator |
2026-01-30 05:52:12.062236 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-01-30 05:52:12.062242 | orchestrator | Friday 30 January 2026 05:52:11 +0000 (0:00:01.273) 0:04:05.293 ********
2026-01-30 05:52:12.062247 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:12.062253 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:12.062265 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.966110 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.966209 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:27.966282 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:27.966326 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:27.966337 | orchestrator |
2026-01-30 05:52:27.966347 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-01-30 05:52:27.966357 | orchestrator | Friday 30 January 2026 05:52:13 +0000 (0:00:01.746) 0:04:07.040 ********
2026-01-30 05:52:27.966366 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:27.966375 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:27.966383 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.966392 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.966400 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:27.966409 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:27.966417 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:27.966426 | orchestrator |
2026-01-30 05:52:27.966435 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-01-30 05:52:27.966444 | orchestrator | Friday 30 January 2026 05:52:15 +0000 (0:00:02.406) 0:04:09.446 ********
2026-01-30 05:52:27.966452 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:27.966461 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:27.966480 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.966498 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.966506 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:27.966515 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:27.966523 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:27.966532 | orchestrator |
2026-01-30 05:52:27.966540 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-01-30 05:52:27.966573 | orchestrator | Friday 30 January 2026 05:52:17 +0000 (0:00:02.096) 0:04:11.542 ********
2026-01-30 05:52:27.966582 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:27.966591 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:27.966602 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.966611 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.966622 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:27.966631 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:27.966640 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:27.966650 | orchestrator |
2026-01-30 05:52:27.966660 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-01-30 05:52:27.966671 | orchestrator | Friday 30 January 2026 05:52:19 +0000 (0:00:01.864) 0:04:13.407 ********
2026-01-30 05:52:27.966681 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:27.966691 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:27.966701 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.966710 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.966720 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:27.966730 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:27.966740 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:27.966750 | orchestrator |
2026-01-30 05:52:27.966759 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-01-30 05:52:27.966767 | orchestrator | Friday 30 January 2026 05:52:21 +0000 (0:00:02.059) 0:04:15.467 ********
2026-01-30 05:52:27.966814 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:27.966823 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:27.966832 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.966850 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.966859 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:27.966867 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:27.966895 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:27.966912 | orchestrator |
2026-01-30 05:52:27.966927 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-01-30 05:52:27.966941 | orchestrator | Friday 30 January 2026 05:52:23 +0000 (0:00:01.828) 0:04:17.295 ********
2026-01-30 05:52:27.966957 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:27.966999 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:27.967010 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.967018 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.967026 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:27.967049 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:27.967059 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:27.967067 | orchestrator |
2026-01-30 05:52:27.967076 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-01-30 05:52:27.967085 | orchestrator | Friday 30 January 2026 05:52:25 +0000 (0:00:02.146) 0:04:19.442 ********
2026-01-30 05:52:27.967094 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:27.967104 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:27.967115 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:27.967137 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:27.967147 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:27.967165 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:27.967174 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:27.967202 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:27.967277 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:27.967288 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:27.967297 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:27.967305 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:27.967332 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:27.967341 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:27.967350 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:27.967358 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:27.967367 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:27.967375 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:27.967384 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:27.967393 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:27.967401 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:27.967416 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:27.967424 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:27.967433 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:27.967442 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:27.967450 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:27.967466 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:27.967475 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:27.967484 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:27.967492 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:27.967501 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:27.967517 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:30.955450 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:30.955583 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:30.955603 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:30.955615 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:30.955625 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:30.955651 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:30.955713 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:30.955728 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:30.955742 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:30.955755 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:30.955767 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:30.955778 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:30.955904 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:30.955924 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:30.955936 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:30.955948 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:30.955984 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:30.955996 | orchestrator |
2026-01-30 05:52:30.956008 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-01-30 05:52:30.956021 | orchestrator | Friday 30 January 2026 05:52:27 +0000 (0:00:02.123) 0:04:21.565 ********
2026-01-30 05:52:30.956032 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:30.956042 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:30.956053 | orchestrator | skipping: [testbed-node-2]
2026-01-30 05:52:30.956065 | orchestrator | skipping: [testbed-node-3]
2026-01-30 05:52:30.956075 | orchestrator | skipping: [testbed-node-4]
2026-01-30 05:52:30.956086 | orchestrator | skipping: [testbed-node-5]
2026-01-30 05:52:30.956097 | orchestrator | skipping: [testbed-manager]
2026-01-30 05:52:30.956108 | orchestrator |
2026-01-30 05:52:30.956120 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-01-30 05:52:30.956131 | orchestrator | Friday 30 January 2026 05:52:30 +0000 (0:00:02.149) 0:04:23.715 ********
2026-01-30 05:52:30.956143 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:30.956153 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:30.956165 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:30.956177 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:30.956210 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:30.956224 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:30.956236 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:52:30.956248 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:30.956260 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:30.956271 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-01-30 05:52:30.956284 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-01-30 05:52:30.956295 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-01-30 05:52:30.956307 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-01-30 05:52:30.956317 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:52:30.956325 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-01-30 05:52:30.956342 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-01-30 05:52:30.956349 |
orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-01-30 05:52:30.956364 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-01-30 05:52:30.956372 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-01-30 05:52:30.956378 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-01-30 05:52:30.956384 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:30.956390 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-01-30 05:52:30.956397 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-01-30 05:52:30.956403 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-01-30 05:52:30.956409 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-01-30 05:52:30.956415 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-01-30 05:52:30.956421 | 
orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-01-30 05:52:30.956427 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-01-30 05:52:30.956440 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-01-30 05:52:59.349544 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-01-30 05:52:59.349674 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-01-30 05:52:59.349689 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-01-30 05:52:59.349698 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-01-30 05:52:59.349705 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.349712 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-01-30 05:52:59.349719 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:52:59.349744 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-01-30 05:52:59.349750 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-01-30 05:52:59.349757 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-01-30 05:52:59.349763 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-01-30 05:52:59.349769 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-01-30 05:52:59.349775 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-01-30 05:52:59.349792 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-01-30 05:52:59.349800 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-01-30 05:52:59.349869 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-01-30 05:52:59.349881 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd 
pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-01-30 05:52:59.349890 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.349899 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-01-30 05:52:59.349908 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:52:59.349918 | orchestrator | 2026-01-30 05:52:59.349928 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-01-30 05:52:59.349938 | orchestrator | Friday 30 January 2026 05:52:32 +0000 (0:00:02.156) 0:04:25.872 ******** 2026-01-30 05:52:59.349947 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:52:59.349956 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:52:59.349966 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:59.349975 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.349984 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:52:59.349994 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:52:59.350004 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.350059 | orchestrator | 2026-01-30 05:52:59.350069 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-01-30 05:52:59.350075 | orchestrator | Friday 30 January 2026 05:52:34 +0000 (0:00:02.156) 0:04:28.029 ******** 2026-01-30 05:52:59.350081 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:52:59.350088 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:52:59.350095 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:59.350101 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.350109 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:52:59.350115 | orchestrator | skipping: [testbed-node-5] 2026-01-30 
05:52:59.350122 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.350129 | orchestrator | 2026-01-30 05:52:59.350136 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-01-30 05:52:59.350174 | orchestrator | Friday 30 January 2026 05:52:36 +0000 (0:00:01.990) 0:04:30.019 ******** 2026-01-30 05:52:59.350189 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:52:59.350198 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:52:59.350207 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:59.350216 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.350225 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:52:59.350233 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:52:59.350243 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.350251 | orchestrator | 2026-01-30 05:52:59.350261 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-01-30 05:52:59.350269 | orchestrator | Friday 30 January 2026 05:52:38 +0000 (0:00:02.240) 0:04:32.259 ******** 2026-01-30 05:52:59.350279 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-01-30 05:52:59.350289 | orchestrator | 2026-01-30 05:52:59.350299 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-01-30 05:52:59.350308 | orchestrator | Friday 30 January 2026 05:52:41 +0000 (0:00:02.682) 0:04:34.942 ******** 2026-01-30 05:52:59.350317 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-01-30 05:52:59.350328 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-01-30 05:52:59.350338 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-01-30 05:52:59.350349 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-01-30 05:52:59.350358 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-01-30 05:52:59.350367 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-01-30 05:52:59.350377 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-01-30 05:52:59.350387 | orchestrator | 2026-01-30 05:52:59.350396 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-01-30 05:52:59.350406 | orchestrator | Friday 30 January 2026 05:52:43 +0000 (0:00:01.943) 0:04:36.885 ******** 2026-01-30 05:52:59.350416 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:52:59.350426 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:52:59.350436 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:59.350446 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.350456 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:52:59.350464 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:52:59.350470 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.350475 | orchestrator | 2026-01-30 05:52:59.350481 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-01-30 05:52:59.350487 | orchestrator | Friday 30 January 2026 05:52:45 +0000 (0:00:02.030) 0:04:38.915 ******** 2026-01-30 05:52:59.350499 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:52:59.350505 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:52:59.350510 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:59.350516 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.350522 | orchestrator | skipping: [testbed-node-4] 
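The `ceph-validate` tasks above loop over `openstack_keys` entries (dicts with `name`, `mode`, and `caps` fields, as shown in each `(item=...)` label) and skip the format/caps checks on every host. As a rough illustration of what such a check verifies, here is a minimal standalone Python sketch; the `validate_key_item` helper is hypothetical and is NOT the actual ceph-ansible implementation:

```python
# Hypothetical sketch of a caps check over openstack_keys entries.
# Not ceph-ansible code; illustrates the item structure seen in the
# log's "Validate openstack_keys caps" loop labels.

def validate_key_item(item: dict) -> list[str]:
    """Return a list of problems found in one openstack_keys entry."""
    problems = []
    if not item.get("name", "").startswith("client."):
        problems.append("name must start with 'client.'")
    if item.get("mode") != "0600":
        problems.append("keyring mode should be 0600")
    if "mon" not in item.get("caps", {}):
        problems.append("missing 'mon' capability")
    return problems

# One entry exactly as it appears in the loop labels above:
glance_key = {
    "caps": {"mon": "profile rbd",
             "osd": "profile rbd pool=volumes, profile rbd pool=images"},
    "mode": "0600",
    "name": "client.glance",
}
```

A well-formed entry like `glance_key` yields an empty problem list; an entry missing the `mon` cap or using a looser mode would be flagged.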
2026-01-30 05:52:59.350527 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:52:59.350533 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.350539 | orchestrator | 2026-01-30 05:52:59.350545 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-01-30 05:52:59.350550 | orchestrator | Friday 30 January 2026 05:52:47 +0000 (0:00:01.916) 0:04:40.831 ******** 2026-01-30 05:52:59.350558 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:52:59.350568 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:52:59.350577 | orchestrator | ok: [testbed-node-2] 2026-01-30 05:52:59.350586 | orchestrator | ok: [testbed-node-3] 2026-01-30 05:52:59.350612 | orchestrator | ok: [testbed-node-4] 2026-01-30 05:52:59.350622 | orchestrator | ok: [testbed-node-5] 2026-01-30 05:52:59.350632 | orchestrator | ok: [testbed-manager] 2026-01-30 05:52:59.350641 | orchestrator | 2026-01-30 05:52:59.350651 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-01-30 05:52:59.350657 | orchestrator | Friday 30 January 2026 05:52:49 +0000 (0:00:02.468) 0:04:43.300 ******** 2026-01-30 05:52:59.350663 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:52:59.350668 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:52:59.350678 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:59.350683 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.350689 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:52:59.350695 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:52:59.350701 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.350706 | orchestrator | 2026-01-30 05:52:59.350712 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-01-30 05:52:59.350718 | orchestrator | Friday 30 January 2026 05:52:52 +0000 (0:00:02.390) 0:04:45.691 ******** 2026-01-30 05:52:59.350723 | orchestrator | 
skipping: [testbed-node-0] 2026-01-30 05:52:59.350729 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:52:59.350734 | orchestrator | skipping: [testbed-node-2] 2026-01-30 05:52:59.350740 | orchestrator | skipping: [testbed-node-3] 2026-01-30 05:52:59.350746 | orchestrator | skipping: [testbed-node-4] 2026-01-30 05:52:59.350751 | orchestrator | skipping: [testbed-node-5] 2026-01-30 05:52:59.350757 | orchestrator | skipping: [testbed-manager] 2026-01-30 05:52:59.350763 | orchestrator | 2026-01-30 05:52:59.350768 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-01-30 05:52:59.350774 | orchestrator | Friday 30 January 2026 05:52:54 +0000 (0:00:02.426) 0:04:48.117 ******** 2026-01-30 05:52:59.350780 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:52:59.350785 | orchestrator | 2026-01-30 05:52:59.350791 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-01-30 05:52:59.350797 | orchestrator | Friday 30 January 2026 05:52:57 +0000 (0:00:02.773) 0:04:50.890 ******** 2026-01-30 05:52:59.350803 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:52:59.350883 | orchestrator | 2026-01-30 05:52:59.350898 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-01-30 05:53:39.224034 | orchestrator | 2026-01-30 05:53:39.224165 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 05:53:39.224187 | orchestrator | Friday 30 January 2026 05:52:59 +0000 (0:00:02.059) 0:04:52.950 ******** 2026-01-30 05:53:39.224201 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224223 | orchestrator | 2026-01-30 05:53:39.224237 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 05:53:39.224249 | orchestrator | Friday 30 January 2026 05:53:00 +0000 (0:00:01.447) 0:04:54.397 ******** 2026-01-30 05:53:39.224261 | 
orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224273 | orchestrator | 2026-01-30 05:53:39.224285 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-01-30 05:53:39.224298 | orchestrator | Friday 30 January 2026 05:53:01 +0000 (0:00:01.103) 0:04:55.501 ******** 2026-01-30 05:53:39.224313 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-30 05:53:39.224328 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-30 05:53:39.224372 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-30 05:53:39.224388 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-30 05:53:39.224415 | 
orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-30 05:53:39.224425 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}])  2026-01-30 05:53:39.224435 | orchestrator | 2026-01-30 05:53:39.224443 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-01-30 05:53:39.224451 | orchestrator | 2026-01-30 05:53:39.224459 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-01-30 05:53:39.224470 | orchestrator | Friday 30 January 2026 05:53:12 +0000 (0:00:10.875) 0:05:06.376 ******** 2026-01-30 05:53:39.224483 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224495 | orchestrator | 2026-01-30 05:53:39.224507 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-01-30 05:53:39.224519 | orchestrator | Friday 30 January 2026 05:53:14 +0000 (0:00:01.486) 0:05:07.863 ******** 2026-01-30 05:53:39.224531 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224543 | orchestrator | 2026-01-30 05:53:39.224557 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-01-30 05:53:39.224571 | orchestrator | Friday 30 January 2026 05:53:15 +0000 (0:00:01.160) 0:05:09.024 
******** 2026-01-30 05:53:39.224586 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:39.224601 | orchestrator | 2026-01-30 05:53:39.224611 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-01-30 05:53:39.224624 | orchestrator | Friday 30 January 2026 05:53:16 +0000 (0:00:01.135) 0:05:10.159 ******** 2026-01-30 05:53:39.224638 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224651 | orchestrator | 2026-01-30 05:53:39.224664 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 05:53:39.224679 | orchestrator | Friday 30 January 2026 05:53:17 +0000 (0:00:01.208) 0:05:11.368 ******** 2026-01-30 05:53:39.224692 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-01-30 05:53:39.224705 | orchestrator | 2026-01-30 05:53:39.224719 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 05:53:39.224757 | orchestrator | Friday 30 January 2026 05:53:18 +0000 (0:00:01.124) 0:05:12.493 ******** 2026-01-30 05:53:39.224772 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224785 | orchestrator | 2026-01-30 05:53:39.224799 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 05:53:39.224809 | orchestrator | Friday 30 January 2026 05:53:20 +0000 (0:00:01.501) 0:05:13.994 ******** 2026-01-30 05:53:39.224849 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224866 | orchestrator | 2026-01-30 05:53:39.224893 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 05:53:39.224908 | orchestrator | Friday 30 January 2026 05:53:21 +0000 (0:00:01.164) 0:05:15.159 ******** 2026-01-30 05:53:39.224922 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224937 | orchestrator | 2026-01-30 05:53:39.224952 | orchestrator | TASK [ceph-facts : 
Set_fact container_binary] ********************************** 2026-01-30 05:53:39.224967 | orchestrator | Friday 30 January 2026 05:53:22 +0000 (0:00:01.450) 0:05:16.609 ******** 2026-01-30 05:53:39.224981 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.224995 | orchestrator | 2026-01-30 05:53:39.225009 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 05:53:39.225023 | orchestrator | Friday 30 January 2026 05:53:24 +0000 (0:00:01.190) 0:05:17.800 ******** 2026-01-30 05:53:39.225036 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.225050 | orchestrator | 2026-01-30 05:53:39.225065 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 05:53:39.225079 | orchestrator | Friday 30 January 2026 05:53:25 +0000 (0:00:01.101) 0:05:18.902 ******** 2026-01-30 05:53:39.225092 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.225106 | orchestrator | 2026-01-30 05:53:39.225120 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 05:53:39.225135 | orchestrator | Friday 30 January 2026 05:53:26 +0000 (0:00:01.169) 0:05:20.072 ******** 2026-01-30 05:53:39.225146 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:39.225154 | orchestrator | 2026-01-30 05:53:39.225162 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 05:53:39.225170 | orchestrator | Friday 30 January 2026 05:53:27 +0000 (0:00:01.126) 0:05:21.198 ******** 2026-01-30 05:53:39.225178 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.225185 | orchestrator | 2026-01-30 05:53:39.225193 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 05:53:39.225201 | orchestrator | Friday 30 January 2026 05:53:28 +0000 (0:00:01.145) 0:05:22.343 ******** 2026-01-30 05:53:39.225214 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:53:39.225226 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:53:39.225239 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:53:39.225248 | orchestrator | 2026-01-30 05:53:39.225256 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 05:53:39.225263 | orchestrator | Friday 30 January 2026 05:53:30 +0000 (0:00:01.622) 0:05:23.966 ******** 2026-01-30 05:53:39.225271 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:39.225279 | orchestrator | 2026-01-30 05:53:39.225295 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 05:53:39.225303 | orchestrator | Friday 30 January 2026 05:53:31 +0000 (0:00:01.208) 0:05:25.174 ******** 2026-01-30 05:53:39.225311 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:53:39.225319 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:53:39.225326 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:53:39.225334 | orchestrator | 2026-01-30 05:53:39.225342 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 05:53:39.225350 | orchestrator | Friday 30 January 2026 05:53:34 +0000 (0:00:03.206) 0:05:28.381 ******** 2026-01-30 05:53:39.225357 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 05:53:39.225365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 05:53:39.225373 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 05:53:39.225381 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:39.225394 | orchestrator | 2026-01-30 05:53:39.225406 | orchestrator | 
TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 05:53:39.225420 | orchestrator | Friday 30 January 2026 05:53:36 +0000 (0:00:01.417) 0:05:29.798 ******** 2026-01-30 05:53:39.225445 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 05:53:39.225461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 05:53:39.225474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 05:53:39.225486 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:39.225499 | orchestrator | 2026-01-30 05:53:39.225511 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 05:53:39.225524 | orchestrator | Friday 30 January 2026 05:53:38 +0000 (0:00:01.898) 0:05:31.697 ******** 2026-01-30 05:53:39.225552 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:53:59.635755 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:53:59.635957 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:53:59.635990 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.636004 | orchestrator | 2026-01-30 05:53:59.636016 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 05:53:59.636029 | orchestrator | Friday 30 January 2026 05:53:39 +0000 (0:00:01.126) 0:05:32.823 ******** 2026-01-30 05:53:59.636043 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9b4b4ef35663', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 05:53:32.104108', 'end': '2026-01-30 05:53:32.166688', 'delta': '0:00:00.062580', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9b4b4ef35663'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 05:53:59.636077 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 
'stdout': 'b97e426bfe4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 05:53:32.719450', 'end': '2026-01-30 05:53:32.772311', 'delta': '0:00:00.052861', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b97e426bfe4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 05:53:59.636112 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1f4acb9ff46e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 05:53:33.585261', 'end': '2026-01-30 05:53:33.636883', 'delta': '0:00:00.051622', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f4acb9ff46e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 05:53:59.636126 | orchestrator | 2026-01-30 05:53:59.636138 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 05:53:59.636149 | orchestrator | Friday 30 January 2026 05:53:40 +0000 (0:00:01.178) 0:05:34.002 ******** 2026-01-30 05:53:59.636161 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:59.636173 | orchestrator | 2026-01-30 05:53:59.636184 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] 
************* 2026-01-30 05:53:59.636196 | orchestrator | Friday 30 January 2026 05:53:41 +0000 (0:00:01.232) 0:05:35.234 ******** 2026-01-30 05:53:59.636207 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.636218 | orchestrator | 2026-01-30 05:53:59.636229 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 05:53:59.636240 | orchestrator | Friday 30 January 2026 05:53:42 +0000 (0:00:01.236) 0:05:36.471 ******** 2026-01-30 05:53:59.636251 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:59.636263 | orchestrator | 2026-01-30 05:53:59.636284 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 05:53:59.636304 | orchestrator | Friday 30 January 2026 05:53:43 +0000 (0:00:01.130) 0:05:37.601 ******** 2026-01-30 05:53:59.636349 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-01-30 05:53:59.636370 | orchestrator | 2026-01-30 05:53:59.636391 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 05:53:59.636412 | orchestrator | Friday 30 January 2026 05:53:46 +0000 (0:00:02.647) 0:05:40.248 ******** 2026-01-30 05:53:59.636433 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:53:59.636454 | orchestrator | 2026-01-30 05:53:59.636475 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 05:53:59.636497 | orchestrator | Friday 30 January 2026 05:53:47 +0000 (0:00:01.155) 0:05:41.404 ******** 2026-01-30 05:53:59.636518 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.636538 | orchestrator | 2026-01-30 05:53:59.636556 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 05:53:59.636576 | orchestrator | Friday 30 January 2026 05:53:48 +0000 (0:00:01.098) 0:05:42.502 ******** 2026-01-30 05:53:59.636596 | orchestrator | skipping: 
[testbed-node-0] 2026-01-30 05:53:59.636616 | orchestrator | 2026-01-30 05:53:59.636636 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 05:53:59.636655 | orchestrator | Friday 30 January 2026 05:53:50 +0000 (0:00:01.188) 0:05:43.691 ******** 2026-01-30 05:53:59.636674 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.636692 | orchestrator | 2026-01-30 05:53:59.636711 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 05:53:59.636729 | orchestrator | Friday 30 January 2026 05:53:51 +0000 (0:00:01.112) 0:05:44.804 ******** 2026-01-30 05:53:59.636747 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.636765 | orchestrator | 2026-01-30 05:53:59.636784 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 05:53:59.636817 | orchestrator | Friday 30 January 2026 05:53:52 +0000 (0:00:01.100) 0:05:45.905 ******** 2026-01-30 05:53:59.636867 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.636886 | orchestrator | 2026-01-30 05:53:59.636904 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-30 05:53:59.636922 | orchestrator | Friday 30 January 2026 05:53:53 +0000 (0:00:01.126) 0:05:47.031 ******** 2026-01-30 05:53:59.636941 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.636960 | orchestrator | 2026-01-30 05:53:59.636979 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-30 05:53:59.636998 | orchestrator | Friday 30 January 2026 05:53:54 +0000 (0:00:01.580) 0:05:48.612 ******** 2026-01-30 05:53:59.637015 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.637032 | orchestrator | 2026-01-30 05:53:59.637043 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 05:53:59.637054 | 
orchestrator | Friday 30 January 2026 05:53:56 +0000 (0:00:01.113) 0:05:49.726 ******** 2026-01-30 05:53:59.637065 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.637076 | orchestrator | 2026-01-30 05:53:59.637097 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 05:53:59.637109 | orchestrator | Friday 30 January 2026 05:53:57 +0000 (0:00:01.131) 0:05:50.857 ******** 2026-01-30 05:53:59.637120 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:53:59.637131 | orchestrator | 2026-01-30 05:53:59.637141 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 05:53:59.637152 | orchestrator | Friday 30 January 2026 05:53:58 +0000 (0:00:01.157) 0:05:52.014 ******** 2026-01-30 05:53:59.637164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:53:59.637176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:53:59.637187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:53:59.637200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 05:53:59.637227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:54:00.824750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:54:00.824904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:54:00.824960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': 
[]}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 05:54:00.824984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:54:00.825040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 05:54:00.825058 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:00.825104 | orchestrator | 2026-01-30 05:54:00.825119 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 05:54:00.825134 | orchestrator | Friday 30 January 2026 05:53:59 +0000 (0:00:01.215) 0:05:53.230 ******** 2026-01-30 05:54:00.825172 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:00.825188 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:00.825209 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:00.825224 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 
'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:00.825238 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:00.825252 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:00.825289 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:24.570157 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': 
'217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:24.570251 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 05:54:24.570263 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-01-30 05:54:24.570289 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570298 | orchestrator | 2026-01-30 05:54:24.570306 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 05:54:24.570313 | orchestrator | Friday 30 January 2026 05:54:00 +0000 (0:00:01.191) 0:05:54.422 ******** 2026-01-30 05:54:24.570320 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:54:24.570327 | orchestrator | 2026-01-30 05:54:24.570334 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 05:54:24.570340 | orchestrator | Friday 30 January 2026 05:54:02 +0000 (0:00:01.491) 0:05:55.913 ******** 2026-01-30 05:54:24.570346 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:54:24.570352 | orchestrator | 2026-01-30 05:54:24.570358 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 05:54:24.570376 | orchestrator | Friday 30 January 2026 05:54:03 +0000 (0:00:01.091) 0:05:57.005 ******** 2026-01-30 05:54:24.570382 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:54:24.570388 | orchestrator | 2026-01-30 05:54:24.570395 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 05:54:24.570401 | orchestrator | Friday 30 January 2026 05:54:04 +0000 (0:00:01.483) 0:05:58.488 ******** 2026-01-30 05:54:24.570407 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570413 | orchestrator | 2026-01-30 05:54:24.570419 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 05:54:24.570426 | orchestrator | Friday 30 January 2026 05:54:05 +0000 (0:00:01.103) 0:05:59.592 ******** 2026-01-30 05:54:24.570432 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570438 | orchestrator | 2026-01-30 05:54:24.570444 | orchestrator | TASK [ceph-facts : Set 
osd_pool_default_crush_rule fact] *********************** 2026-01-30 05:54:24.570450 | orchestrator | Friday 30 January 2026 05:54:07 +0000 (0:00:01.215) 0:06:00.807 ******** 2026-01-30 05:54:24.570456 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570463 | orchestrator | 2026-01-30 05:54:24.570469 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 05:54:24.570475 | orchestrator | Friday 30 January 2026 05:54:08 +0000 (0:00:01.136) 0:06:01.943 ******** 2026-01-30 05:54:24.570481 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:54:24.570487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-30 05:54:24.570494 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-30 05:54:24.570500 | orchestrator | 2026-01-30 05:54:24.570506 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 05:54:24.570516 | orchestrator | Friday 30 January 2026 05:54:10 +0000 (0:00:01.929) 0:06:03.873 ******** 2026-01-30 05:54:24.570523 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 05:54:24.570530 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 05:54:24.570536 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 05:54:24.570542 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570548 | orchestrator | 2026-01-30 05:54:24.570554 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 05:54:24.570560 | orchestrator | Friday 30 January 2026 05:54:11 +0000 (0:00:01.156) 0:06:05.030 ******** 2026-01-30 05:54:24.570566 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570572 | orchestrator | 2026-01-30 05:54:24.570579 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 
05:54:24.570585 | orchestrator | Friday 30 January 2026 05:54:12 +0000 (0:00:01.115) 0:06:06.146 ******** 2026-01-30 05:54:24.570591 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:54:24.570602 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:54:24.570609 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:54:24.570615 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 05:54:24.570621 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 05:54:24.570627 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 05:54:24.570634 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 05:54:24.570640 | orchestrator | 2026-01-30 05:54:24.570646 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 05:54:24.570652 | orchestrator | Friday 30 January 2026 05:54:14 +0000 (0:00:02.030) 0:06:08.176 ******** 2026-01-30 05:54:24.570659 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:54:24.570667 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:54:24.570674 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:54:24.570681 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 05:54:24.570688 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 05:54:24.570695 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 05:54:24.570702 | orchestrator | ok: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 05:54:24.570709 | orchestrator | 2026-01-30 05:54:24.570716 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-01-30 05:54:24.570723 | orchestrator | Friday 30 January 2026 05:54:17 +0000 (0:00:02.810) 0:06:10.987 ******** 2026-01-30 05:54:24.570730 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-01-30 05:54:24.570737 | orchestrator | 2026-01-30 05:54:24.570744 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-01-30 05:54:24.570751 | orchestrator | Friday 30 January 2026 05:54:19 +0000 (0:00:02.361) 0:06:13.348 ******** 2026-01-30 05:54:24.570758 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570765 | orchestrator | 2026-01-30 05:54:24.570773 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-01-30 05:54:24.570780 | orchestrator | Friday 30 January 2026 05:54:21 +0000 (0:00:01.283) 0:06:14.632 ******** 2026-01-30 05:54:24.570787 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:54:24.570794 | orchestrator | 2026-01-30 05:54:24.570801 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-01-30 05:54:24.570808 | orchestrator | Friday 30 January 2026 05:54:22 +0000 (0:00:01.098) 0:06:15.731 ******** 2026-01-30 05:54:24.570815 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-01-30 05:54:24.570822 | orchestrator | 2026-01-30 05:54:24.570878 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-01-30 05:54:24.570899 | orchestrator | Friday 30 January 2026 05:54:24 +0000 (0:00:02.435) 0:06:18.166 ******** 2026-01-30 05:55:26.868999 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:55:26.869121 | orchestrator | 2026-01-30 05:55:26.869140 | orchestrator | TASK [Ensure 
/var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-01-30 05:55:26.869150 | orchestrator | Friday 30 January 2026 05:54:25 +0000 (0:00:01.122) 0:06:19.289 ******** 2026-01-30 05:55:26.869159 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:55:26.869166 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 05:55:26.869174 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:55:26.869181 | orchestrator | 2026-01-30 05:55:26.869189 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-01-30 05:55:26.869220 | orchestrator | Friday 30 January 2026 05:54:28 +0000 (0:00:02.557) 0:06:21.847 ******** 2026-01-30 05:55:26.869227 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-01-30 05:55:26.869234 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-01-30 05:55:26.869243 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-01-30 05:55:26.869249 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-01-30 05:55:26.869268 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-01-30 05:55:26.869275 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-01-30 05:55:26.869282 | orchestrator | 2026-01-30 05:55:26.869289 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-01-30 05:55:26.869295 | orchestrator | Friday 30 January 2026 05:54:42 +0000 (0:00:13.959) 0:06:35.806 ******** 2026-01-30 05:55:26.869302 | orchestrator | changed: [testbed-node-0] => 
(item=testbed-node-0)
2026-01-30 05:55:26.869309 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 05:55:26.869316 | orchestrator |
2026-01-30 05:55:26.869322 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-01-30 05:55:26.869329 | orchestrator | Friday 30 January 2026 05:54:46 +0000 (0:00:04.285) 0:06:40.091 ********
2026-01-30 05:55:26.869336 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:55:26.869342 | orchestrator |
2026-01-30 05:55:26.869349 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 05:55:26.869355 | orchestrator | Friday 30 January 2026 05:54:49 +0000 (0:00:02.672) 0:06:42.764 ********
2026-01-30 05:55:26.869362 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-01-30 05:55:26.869369 | orchestrator |
2026-01-30 05:55:26.869376 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 05:55:26.869383 | orchestrator | Friday 30 January 2026 05:54:50 +0000 (0:00:01.461) 0:06:44.226 ********
2026-01-30 05:55:26.869390 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-01-30 05:55:26.869396 | orchestrator |
2026-01-30 05:55:26.869403 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 05:55:26.869410 | orchestrator | Friday 30 January 2026 05:54:52 +0000 (0:00:01.636) 0:06:45.862 ********
2026-01-30 05:55:26.869416 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.869423 | orchestrator |
2026-01-30 05:55:26.869430 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 05:55:26.869436 | orchestrator | Friday 30 January 2026 05:54:53 +0000 (0:00:01.575) 0:06:47.438 ********
2026-01-30 05:55:26.869444 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869452 | orchestrator |
2026-01-30 05:55:26.869459 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 05:55:26.869467 | orchestrator | Friday 30 January 2026 05:54:54 +0000 (0:00:01.124) 0:06:48.562 ********
2026-01-30 05:55:26.869475 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869482 | orchestrator |
2026-01-30 05:55:26.869490 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 05:55:26.869497 | orchestrator | Friday 30 January 2026 05:54:56 +0000 (0:00:01.159) 0:06:49.722 ********
2026-01-30 05:55:26.869505 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869513 | orchestrator |
2026-01-30 05:55:26.869520 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 05:55:26.869528 | orchestrator | Friday 30 January 2026 05:54:57 +0000 (0:00:01.105) 0:06:50.828 ********
2026-01-30 05:55:26.869536 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.869543 | orchestrator |
2026-01-30 05:55:26.869550 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 05:55:26.869564 | orchestrator | Friday 30 January 2026 05:54:58 +0000 (0:00:01.600) 0:06:52.428 ********
2026-01-30 05:55:26.869572 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869580 | orchestrator |
2026-01-30 05:55:26.869587 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 05:55:26.869595 | orchestrator | Friday 30 January 2026 05:54:59 +0000 (0:00:01.121) 0:06:53.550 ********
2026-01-30 05:55:26.869601 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869608 | orchestrator |
2026-01-30 05:55:26.869614 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 05:55:26.869621 | orchestrator | Friday 30 January 2026 05:55:01 +0000 (0:00:01.126) 0:06:54.676 ********
2026-01-30 05:55:26.869628 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.869634 | orchestrator |
2026-01-30 05:55:26.869641 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 05:55:26.869647 | orchestrator | Friday 30 January 2026 05:55:02 +0000 (0:00:01.590) 0:06:56.267 ********
2026-01-30 05:55:26.869656 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.869667 | orchestrator |
2026-01-30 05:55:26.869696 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 05:55:26.869709 | orchestrator | Friday 30 January 2026 05:55:04 +0000 (0:00:01.565) 0:06:57.832 ********
2026-01-30 05:55:26.869719 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869729 | orchestrator |
2026-01-30 05:55:26.869740 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 05:55:26.869751 | orchestrator | Friday 30 January 2026 05:55:05 +0000 (0:00:01.137) 0:06:58.970 ********
2026-01-30 05:55:26.869762 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.869773 | orchestrator |
2026-01-30 05:55:26.869783 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 05:55:26.869794 | orchestrator | Friday 30 January 2026 05:55:06 +0000 (0:00:01.153) 0:07:00.123 ********
2026-01-30 05:55:26.869804 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869814 | orchestrator |
2026-01-30 05:55:26.869825 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 05:55:26.869835 | orchestrator | Friday 30 January 2026 05:55:07 +0000 (0:00:01.158) 0:07:01.281 ********
2026-01-30 05:55:26.869932 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869948 | orchestrator |
2026-01-30 05:55:26.869960 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 05:55:26.869973 | orchestrator | Friday 30 January 2026 05:55:08 +0000 (0:00:01.190) 0:07:02.471 ********
2026-01-30 05:55:26.869985 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.869997 | orchestrator |
2026-01-30 05:55:26.870009 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 05:55:26.870100 | orchestrator | Friday 30 January 2026 05:55:09 +0000 (0:00:01.126) 0:07:03.598 ********
2026-01-30 05:55:26.870113 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870125 | orchestrator |
2026-01-30 05:55:26.870137 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 05:55:26.870149 | orchestrator | Friday 30 January 2026 05:55:11 +0000 (0:00:01.153) 0:07:04.751 ********
2026-01-30 05:55:26.870161 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870173 | orchestrator |
2026-01-30 05:55:26.870185 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 05:55:26.870198 | orchestrator | Friday 30 January 2026 05:55:12 +0000 (0:00:01.104) 0:07:05.856 ********
2026-01-30 05:55:26.870210 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.870223 | orchestrator |
2026-01-30 05:55:26.870235 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 05:55:26.870248 | orchestrator | Friday 30 January 2026 05:55:13 +0000 (0:00:01.145) 0:07:07.001 ********
2026-01-30 05:55:26.870260 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.870273 | orchestrator |
2026-01-30 05:55:26.870285 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 05:55:26.870308 | orchestrator | Friday 30 January 2026 05:55:14 +0000 (0:00:01.141) 0:07:08.143 ********
2026-01-30 05:55:26.870319 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:55:26.870330 | orchestrator |
2026-01-30 05:55:26.870339 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 05:55:26.870350 | orchestrator | Friday 30 January 2026 05:55:15 +0000 (0:00:01.152) 0:07:09.295 ********
2026-01-30 05:55:26.870361 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870372 | orchestrator |
2026-01-30 05:55:26.870384 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 05:55:26.870396 | orchestrator | Friday 30 January 2026 05:55:16 +0000 (0:00:01.095) 0:07:10.390 ********
2026-01-30 05:55:26.870409 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870421 | orchestrator |
2026-01-30 05:55:26.870433 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 05:55:26.870445 | orchestrator | Friday 30 January 2026 05:55:17 +0000 (0:00:01.115) 0:07:11.506 ********
2026-01-30 05:55:26.870458 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870470 | orchestrator |
2026-01-30 05:55:26.870483 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 05:55:26.870495 | orchestrator | Friday 30 January 2026 05:55:19 +0000 (0:00:01.196) 0:07:12.703 ********
2026-01-30 05:55:26.870507 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870519 | orchestrator |
2026-01-30 05:55:26.870531 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 05:55:26.870544 | orchestrator | Friday 30 January 2026 05:55:20 +0000 (0:00:01.097) 0:07:13.800 ********
2026-01-30 05:55:26.870555 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870566 | orchestrator |
2026-01-30 05:55:26.870576 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 05:55:26.870587 | orchestrator | Friday 30 January 2026 05:55:21 +0000 (0:00:01.092) 0:07:14.892 ********
2026-01-30 05:55:26.870597 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870608 | orchestrator |
2026-01-30 05:55:26.870620 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 05:55:26.870633 | orchestrator | Friday 30 January 2026 05:55:22 +0000 (0:00:01.129) 0:07:16.022 ********
2026-01-30 05:55:26.870645 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870658 | orchestrator |
2026-01-30 05:55:26.870670 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-01-30 05:55:26.870683 | orchestrator | Friday 30 January 2026 05:55:23 +0000 (0:00:01.115) 0:07:17.137 ********
2026-01-30 05:55:26.870695 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870705 | orchestrator |
2026-01-30 05:55:26.870717 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-01-30 05:55:26.870729 | orchestrator | Friday 30 January 2026 05:55:24 +0000 (0:00:01.124) 0:07:18.261 ********
2026-01-30 05:55:26.870741 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870753 | orchestrator |
2026-01-30 05:55:26.870766 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-01-30 05:55:26.870778 | orchestrator | Friday 30 January 2026 05:55:25 +0000 (0:00:01.100) 0:07:19.361 ********
2026-01-30 05:55:26.870790 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:55:26.870803 | orchestrator |
2026-01-30 05:55:26.870814 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-01-30 05:55:26.870826 | orchestrator | Friday 30 January 2026 05:55:26 +0000 (0:00:01.101) 0:07:20.463 ********
2026-01-30 05:56:17.409825 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410091 | orchestrator |
2026-01-30 05:56:17.410133 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-01-30 05:56:17.410156 | orchestrator | Friday 30 January 2026 05:55:28 +0000 (0:00:01.151) 0:07:21.614 ********
2026-01-30 05:56:17.410169 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410181 | orchestrator |
2026-01-30 05:56:17.410220 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-30 05:56:17.410232 | orchestrator | Friday 30 January 2026 05:55:29 +0000 (0:00:01.112) 0:07:22.727 ********
2026-01-30 05:56:17.410243 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:56:17.410254 | orchestrator |
2026-01-30 05:56:17.410265 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-30 05:56:17.410276 | orchestrator | Friday 30 January 2026 05:55:31 +0000 (0:00:02.082) 0:07:24.809 ********
2026-01-30 05:56:17.410286 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:56:17.410297 | orchestrator |
2026-01-30 05:56:17.410308 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 05:56:17.410318 | orchestrator | Friday 30 January 2026 05:55:33 +0000 (0:00:02.606) 0:07:27.415 ********
2026-01-30 05:56:17.410329 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-01-30 05:56:17.410341 | orchestrator |
2026-01-30 05:56:17.410352 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-30 05:56:17.410366 | orchestrator | Friday 30 January 2026 05:55:35 +0000 (0:00:01.409) 0:07:28.825 ********
2026-01-30 05:56:17.410395 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410408 | orchestrator |
2026-01-30 05:56:17.410420 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-30 05:56:17.410433 | orchestrator | Friday 30 January 2026 05:55:36 +0000 (0:00:01.086) 0:07:29.911 ********
2026-01-30 05:56:17.410445 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410457 | orchestrator |
2026-01-30 05:56:17.410470 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-30 05:56:17.410480 | orchestrator | Friday 30 January 2026 05:55:37 +0000 (0:00:01.094) 0:07:31.006 ********
2026-01-30 05:56:17.410491 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 05:56:17.410502 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 05:56:17.410513 | orchestrator |
2026-01-30 05:56:17.410524 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-30 05:56:17.410535 | orchestrator | Friday 30 January 2026 05:55:39 +0000 (0:00:01.723) 0:07:32.730 ********
2026-01-30 05:56:17.410545 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:56:17.410564 | orchestrator |
2026-01-30 05:56:17.410583 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-30 05:56:17.410600 | orchestrator | Friday 30 January 2026 05:55:40 +0000 (0:00:01.620) 0:07:34.350 ********
2026-01-30 05:56:17.410618 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410637 | orchestrator |
2026-01-30 05:56:17.410656 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-30 05:56:17.410676 | orchestrator | Friday 30 January 2026 05:55:41 +0000 (0:00:01.103) 0:07:35.454 ********
2026-01-30 05:56:17.410697 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410717 | orchestrator |
2026-01-30 05:56:17.410738 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 05:56:17.410755 | orchestrator | Friday 30 January 2026 05:55:42 +0000 (0:00:01.086) 0:07:36.541 ********
2026-01-30 05:56:17.410767 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410777 | orchestrator |
2026-01-30 05:56:17.410788 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 05:56:17.410798 | orchestrator | Friday 30 January 2026 05:55:44 +0000 (0:00:01.083) 0:07:37.625 ********
2026-01-30 05:56:17.410809 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-01-30 05:56:17.410820 | orchestrator |
2026-01-30 05:56:17.410831 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-30 05:56:17.410841 | orchestrator | Friday 30 January 2026 05:55:45 +0000 (0:00:01.238) 0:07:38.863 ********
2026-01-30 05:56:17.410852 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:56:17.410888 | orchestrator |
2026-01-30 05:56:17.410901 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-30 05:56:17.410923 | orchestrator | Friday 30 January 2026 05:55:46 +0000 (0:00:01.669) 0:07:40.533 ********
2026-01-30 05:56:17.410934 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 05:56:17.410945 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 05:56:17.410956 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 05:56:17.410966 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.410977 | orchestrator |
2026-01-30 05:56:17.410987 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-30 05:56:17.410998 | orchestrator | Friday 30 January 2026 05:55:48 +0000 (0:00:01.098) 0:07:41.632 ********
2026-01-30 05:56:17.411008 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411019 | orchestrator |
2026-01-30 05:56:17.411029 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-30 05:56:17.411040 | orchestrator | Friday 30 January 2026 05:55:49 +0000 (0:00:00.991) 0:07:42.623 ********
2026-01-30 05:56:17.411051 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411061 | orchestrator |
2026-01-30 05:56:17.411072 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-30 05:56:17.411083 | orchestrator | Friday 30 January 2026 05:55:49 +0000 (0:00:00.941) 0:07:43.565 ********
2026-01-30 05:56:17.411094 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411105 | orchestrator |
2026-01-30 05:56:17.411115 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-30 05:56:17.411147 | orchestrator | Friday 30 January 2026 05:55:51 +0000 (0:00:01.095) 0:07:44.661 ********
2026-01-30 05:56:17.411159 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411169 | orchestrator |
2026-01-30 05:56:17.411180 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-30 05:56:17.411199 | orchestrator | Friday 30 January 2026 05:55:52 +0000 (0:00:01.078) 0:07:45.739 ********
2026-01-30 05:56:17.411216 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411234 | orchestrator |
2026-01-30 05:56:17.411251 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-30 05:56:17.411266 | orchestrator | Friday 30 January 2026 05:55:53 +0000 (0:00:01.087) 0:07:46.826 ********
2026-01-30 05:56:17.411284 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:56:17.411301 | orchestrator |
2026-01-30 05:56:17.411318 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-30 05:56:17.411335 | orchestrator | Friday 30 January 2026 05:55:55 +0000 (0:00:02.629) 0:07:49.456 ********
2026-01-30 05:56:17.411353 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:56:17.411370 | orchestrator |
2026-01-30 05:56:17.411387 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-30 05:56:17.411403 | orchestrator | Friday 30 January 2026 05:55:57 +0000 (0:00:01.164) 0:07:50.620 ********
2026-01-30 05:56:17.411419 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-01-30 05:56:17.411435 | orchestrator |
2026-01-30 05:56:17.411453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-30 05:56:17.411482 | orchestrator | Friday 30 January 2026 05:55:58 +0000 (0:00:01.468) 0:07:52.089 ********
2026-01-30 05:56:17.411500 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411519 | orchestrator |
2026-01-30 05:56:17.411537 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-30 05:56:17.411554 | orchestrator | Friday 30 January 2026 05:55:59 +0000 (0:00:01.131) 0:07:53.220 ********
2026-01-30 05:56:17.411571 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411589 | orchestrator |
2026-01-30 05:56:17.411607 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-30 05:56:17.411625 | orchestrator | Friday 30 January 2026 05:56:00 +0000 (0:00:01.114) 0:07:54.335 ********
2026-01-30 05:56:17.411642 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411673 | orchestrator |
2026-01-30 05:56:17.411690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-30 05:56:17.411707 | orchestrator | Friday 30 January 2026 05:56:01 +0000 (0:00:01.131) 0:07:55.467 ********
2026-01-30 05:56:17.411724 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411742 | orchestrator |
2026-01-30 05:56:17.411758 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-30 05:56:17.411776 | orchestrator | Friday 30 January 2026 05:56:02 +0000 (0:00:01.128) 0:07:56.595 ********
2026-01-30 05:56:17.411793 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411810 | orchestrator |
2026-01-30 05:56:17.411828 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-30 05:56:17.411844 | orchestrator | Friday 30 January 2026 05:56:04 +0000 (0:00:01.116) 0:07:57.712 ********
2026-01-30 05:56:17.411889 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411910 | orchestrator |
2026-01-30 05:56:17.411928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-30 05:56:17.411946 | orchestrator | Friday 30 January 2026 05:56:05 +0000 (0:00:01.142) 0:07:58.854 ********
2026-01-30 05:56:17.411962 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.411979 | orchestrator |
2026-01-30 05:56:17.411996 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-30 05:56:17.412013 | orchestrator | Friday 30 January 2026 05:56:06 +0000 (0:00:01.135) 0:07:59.990 ********
2026-01-30 05:56:17.412030 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:56:17.412048 | orchestrator |
2026-01-30 05:56:17.412065 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-30 05:56:17.412084 | orchestrator | Friday 30 January 2026 05:56:07 +0000 (0:00:01.119) 0:08:01.110 ********
2026-01-30 05:56:17.412101 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:56:17.412118 | orchestrator |
2026-01-30 05:56:17.412135 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-30 05:56:17.412152 | orchestrator | Friday 30 January 2026 05:56:08 +0000 (0:00:01.158) 0:08:02.269 ********
2026-01-30 05:56:17.412169 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-01-30 05:56:17.412186 | orchestrator |
2026-01-30 05:56:17.412203 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-30 05:56:17.412222 | orchestrator | Friday 30 January 2026 05:56:10 +0000 (0:00:01.459) 0:08:03.728 ********
2026-01-30 05:56:17.412240 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-01-30 05:56:17.412259 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-01-30 05:56:17.412279 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-01-30 05:56:17.412299 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-01-30 05:56:17.412316 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-01-30 05:56:17.412333 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-01-30 05:56:17.412352 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-01-30 05:56:17.412371 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-01-30 05:56:17.412391 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 05:56:17.412410 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 05:56:17.412429 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 05:56:17.412449 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 05:56:17.412467 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 05:56:17.412487 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 05:56:17.412525 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-01-30 05:57:04.559702 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-01-30 05:57:04.559796 | orchestrator |
2026-01-30 05:57:04.559811 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-30 05:57:04.559856 | orchestrator | Friday 30 January 2026 05:56:17 +0000 (0:00:07.254) 0:08:10.983 ********
2026-01-30 05:57:04.559872 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.559938 | orchestrator |
2026-01-30 05:57:04.559951 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-30 05:57:04.559962 | orchestrator | Friday 30 January 2026 05:56:18 +0000 (0:00:01.140) 0:08:12.123 ********
2026-01-30 05:57:04.559974 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.559985 | orchestrator |
2026-01-30 05:57:04.559997 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 05:57:04.560009 | orchestrator | Friday 30 January 2026 05:56:19 +0000 (0:00:01.115) 0:08:13.238 ********
2026-01-30 05:57:04.560021 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560033 | orchestrator |
2026-01-30 05:57:04.560044 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 05:57:04.560055 | orchestrator | Friday 30 January 2026 05:56:20 +0000 (0:00:01.127) 0:08:14.365 ********
2026-01-30 05:57:04.560068 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560097 | orchestrator |
2026-01-30 05:57:04.560110 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 05:57:04.560123 | orchestrator | Friday 30 January 2026 05:56:21 +0000 (0:00:01.105) 0:08:15.471 ********
2026-01-30 05:57:04.560136 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560149 | orchestrator |
2026-01-30 05:57:04.560178 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 05:57:04.560192 | orchestrator | Friday 30 January 2026 05:56:22 +0000 (0:00:01.105) 0:08:16.577 ********
2026-01-30 05:57:04.560205 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560218 | orchestrator |
2026-01-30 05:57:04.560231 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-30 05:57:04.560245 | orchestrator | Friday 30 January 2026 05:56:24 +0000 (0:00:01.124) 0:08:17.701 ********
2026-01-30 05:57:04.560258 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560270 | orchestrator |
2026-01-30 05:57:04.560294 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 05:57:04.560309 | orchestrator | Friday 30 January 2026 05:56:25 +0000 (0:00:01.108) 0:08:18.810 ********
2026-01-30 05:57:04.560323 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560336 | orchestrator |
2026-01-30 05:57:04.560349 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-30 05:57:04.560363 | orchestrator | Friday 30 January 2026 05:56:26 +0000 (0:00:01.129) 0:08:19.939 ********
2026-01-30 05:57:04.560378 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560390 | orchestrator |
2026-01-30 05:57:04.560404 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-30 05:57:04.560416 | orchestrator | Friday 30 January 2026 05:56:27 +0000 (0:00:01.128) 0:08:21.068 ********
2026-01-30 05:57:04.560429 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560442 | orchestrator |
2026-01-30 05:57:04.560454 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-30 05:57:04.560465 | orchestrator | Friday 30 January 2026 05:56:28 +0000 (0:00:01.160) 0:08:22.228 ********
2026-01-30 05:57:04.560477 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560488 | orchestrator |
2026-01-30 05:57:04.560500 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-30 05:57:04.560513 | orchestrator | Friday 30 January 2026 05:56:29 +0000 (0:00:01.100) 0:08:23.329 ********
2026-01-30 05:57:04.560526 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560538 | orchestrator |
2026-01-30 05:57:04.560551 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-30 05:57:04.560563 | orchestrator | Friday 30 January 2026 05:56:30 +0000 (0:00:01.103) 0:08:24.433 ********
2026-01-30 05:57:04.560591 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560605 | orchestrator |
2026-01-30 05:57:04.560619 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-30 05:57:04.560632 | orchestrator | Friday 30 January 2026 05:56:32 +0000 (0:00:01.224) 0:08:25.657 ********
2026-01-30 05:57:04.560644 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560657 | orchestrator |
2026-01-30 05:57:04.560669 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-30 05:57:04.560681 | orchestrator | Friday 30 January 2026 05:56:33 +0000 (0:00:01.107) 0:08:26.765 ********
2026-01-30 05:57:04.560693 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560706 | orchestrator |
2026-01-30 05:57:04.560718 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-30 05:57:04.560730 | orchestrator | Friday 30 January 2026 05:56:34 +0000 (0:00:01.186) 0:08:27.951 ********
2026-01-30 05:57:04.560742 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560755 | orchestrator |
2026-01-30 05:57:04.560767 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-30 05:57:04.560779 | orchestrator | Friday 30 January 2026 05:56:35 +0000 (0:00:01.122) 0:08:29.074 ********
2026-01-30 05:57:04.560791 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560802 | orchestrator |
2026-01-30 05:57:04.560814 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 05:57:04.560828 | orchestrator | Friday 30 January 2026 05:56:36 +0000 (0:00:01.096) 0:08:30.171 ********
2026-01-30 05:57:04.560841 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560852 | orchestrator |
2026-01-30 05:57:04.560865 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 05:57:04.560878 | orchestrator | Friday 30 January 2026 05:56:37 +0000 (0:00:01.137) 0:08:31.309 ********
2026-01-30 05:57:04.560918 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560929 | orchestrator |
2026-01-30 05:57:04.560964 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 05:57:04.560976 | orchestrator | Friday 30 January 2026 05:56:38 +0000 (0:00:01.182) 0:08:32.492 ********
2026-01-30 05:57:04.560986 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.560998 | orchestrator |
2026-01-30 05:57:04.561008 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 05:57:04.561019 | orchestrator | Friday 30 January 2026 05:56:40 +0000 (0:00:01.154) 0:08:33.646 ********
2026-01-30 05:57:04.561029 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.561127 | orchestrator |
2026-01-30 05:57:04.561141 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 05:57:04.561152 | orchestrator | Friday 30 January 2026 05:56:41 +0000 (0:00:01.160) 0:08:34.807 ********
2026-01-30 05:57:04.561164 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-30 05:57:04.561176 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-30 05:57:04.561186 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-30 05:57:04.561198 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.561208 | orchestrator |
2026-01-30 05:57:04.561220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 05:57:04.561231 | orchestrator | Friday 30 January 2026 05:56:42 +0000 (0:00:01.436) 0:08:36.243 ********
2026-01-30 05:57:04.561241 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-30 05:57:04.561251 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-30 05:57:04.561274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-30 05:57:04.561284 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.561295 | orchestrator |
2026-01-30 05:57:04.561306 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 05:57:04.561317 | orchestrator | Friday 30 January 2026 05:56:44 +0000 (0:00:01.415) 0:08:37.659 ********
2026-01-30 05:57:04.561343 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-01-30 05:57:04.561354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-01-30 05:57:04.561365 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-01-30 05:57:04.561375 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.561386 | orchestrator |
2026-01-30 05:57:04.561396 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 05:57:04.561407 | orchestrator | Friday 30 January 2026 05:56:45 +0000 (0:00:01.450) 0:08:39.109 ********
2026-01-30 05:57:04.561417 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.561428 | orchestrator |
2026-01-30 05:57:04.561438 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 05:57:04.561449 | orchestrator | Friday 30 January 2026 05:56:46 +0000 (0:00:01.098) 0:08:40.208 ********
2026-01-30 05:57:04.561460 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-01-30 05:57:04.561471 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.561482 | orchestrator |
2026-01-30 05:57:04.561492 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 05:57:04.561503 | orchestrator | Friday 30 January 2026 05:56:47 +0000 (0:00:01.309) 0:08:41.518 ********
2026-01-30 05:57:04.561515 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:57:04.561526 | orchestrator |
2026-01-30 05:57:04.561539 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-30 05:57:04.561551 | orchestrator | Friday 30 January 2026 05:56:49 +0000 (0:00:01.176) 0:08:43.299 ********
2026-01-30 05:57:04.561563 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:57:04.561575 | orchestrator |
2026-01-30 05:57:04.561586 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-30 05:57:04.561599 | orchestrator | Friday 30 January 2026 05:56:50 +0000 (0:00:01.176) 0:08:44.475 ********
2026-01-30 05:57:04.561611 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-01-30 05:57:04.561623 | orchestrator |
2026-01-30 05:57:04.561633 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-30 05:57:04.561643 | orchestrator | Friday 30 January 2026 05:56:52 +0000 (0:00:01.521) 0:08:45.996 ********
2026-01-30 05:57:04.561655 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-01-30 05:57:04.561665 | orchestrator |
2026-01-30 05:57:04.561675 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-30 05:57:04.561687 | orchestrator | Friday 30 January 2026 05:56:55 +0000 (0:00:03.566) 0:08:49.563 ********
2026-01-30 05:57:04.561699 | orchestrator | skipping: [testbed-node-0]
2026-01-30 05:57:04.561709 | orchestrator |
2026-01-30 05:57:04.561720 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-30 05:57:04.561731 | orchestrator | Friday 30 January 2026 05:56:57 +0000 (0:00:01.150) 0:08:50.714 ********
2026-01-30 05:57:04.561742 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:57:04.561754 | orchestrator |
2026-01-30 05:57:04.561765 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-30 05:57:04.561776 | orchestrator | Friday 30 January 2026 05:56:58 +0000 (0:00:01.140) 0:08:51.854 ********
2026-01-30 05:57:04.561786 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:57:04.561797 | orchestrator |
2026-01-30 05:57:04.561807 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-30 05:57:04.561819 | orchestrator | Friday 30 January 2026 05:56:59 +0000 (0:00:01.137) 0:08:52.991 ********
2026-01-30 05:57:04.561831 | orchestrator | changed: [testbed-node-0]
2026-01-30 05:57:04.561844 | orchestrator |
2026-01-30 05:57:04.561855 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-30 05:57:04.561868 | orchestrator | Friday 30 January 2026 05:57:01 +0000 (0:00:02.056) 0:08:55.048 ********
2026-01-30 05:57:04.561914 | orchestrator | ok: [testbed-node-0]
2026-01-30 05:57:04.561928 | orchestrator |
2026-01-30 05:57:04.561940 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-30 05:57:04.561968 |
orchestrator | Friday 30 January 2026 05:57:03 +0000 (0:00:01.583) 0:08:56.631 ******** 2026-01-30 05:57:04.561979 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:57:04.561989 | orchestrator | 2026-01-30 05:57:04.562111 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-30 05:58:03.823654 | orchestrator | Friday 30 January 2026 05:57:04 +0000 (0:00:01.524) 0:08:58.156 ******** 2026-01-30 05:58:03.823735 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.823742 | orchestrator | 2026-01-30 05:58:03.823747 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-30 05:58:03.823751 | orchestrator | Friday 30 January 2026 05:57:06 +0000 (0:00:01.569) 0:08:59.725 ******** 2026-01-30 05:58:03.823755 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.823759 | orchestrator | 2026-01-30 05:58:03.823764 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-30 05:58:03.823768 | orchestrator | Friday 30 January 2026 05:57:07 +0000 (0:00:01.786) 0:09:01.511 ******** 2026-01-30 05:58:03.823772 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.823776 | orchestrator | 2026-01-30 05:58:03.823780 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-30 05:58:03.823783 | orchestrator | Friday 30 January 2026 05:57:09 +0000 (0:00:01.780) 0:09:03.292 ******** 2026-01-30 05:58:03.823787 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-30 05:58:03.823792 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-30 05:58:03.823796 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 05:58:03.823800 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-01-30 05:58:03.823804 | orchestrator | 2026-01-30 05:58:03.823808 | orchestrator | TASK [ceph-mon : Import admin 
keyring into mon keyring] ************************ 2026-01-30 05:58:03.823823 | orchestrator | Friday 30 January 2026 05:57:13 +0000 (0:00:04.081) 0:09:07.373 ******** 2026-01-30 05:58:03.823827 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:58:03.823831 | orchestrator | 2026-01-30 05:58:03.823834 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-01-30 05:58:03.823838 | orchestrator | Friday 30 January 2026 05:57:15 +0000 (0:00:02.121) 0:09:09.495 ******** 2026-01-30 05:58:03.823842 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.823846 | orchestrator | 2026-01-30 05:58:03.823849 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-30 05:58:03.823853 | orchestrator | Friday 30 January 2026 05:57:17 +0000 (0:00:01.133) 0:09:10.629 ******** 2026-01-30 05:58:03.823857 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.823861 | orchestrator | 2026-01-30 05:58:03.823864 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-30 05:58:03.823868 | orchestrator | Friday 30 January 2026 05:57:18 +0000 (0:00:01.155) 0:09:11.785 ******** 2026-01-30 05:58:03.823872 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.823876 | orchestrator | 2026-01-30 05:58:03.823879 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-30 05:58:03.823883 | orchestrator | Friday 30 January 2026 05:57:20 +0000 (0:00:02.238) 0:09:14.023 ******** 2026-01-30 05:58:03.823887 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.823891 | orchestrator | 2026-01-30 05:58:03.823894 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-30 05:58:03.823898 | orchestrator | Friday 30 January 2026 05:57:21 +0000 (0:00:01.503) 0:09:15.527 ******** 2026-01-30 05:58:03.823941 | orchestrator | skipping: [testbed-node-0] 
2026-01-30 05:58:03.823946 | orchestrator | 2026-01-30 05:58:03.823950 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-30 05:58:03.823954 | orchestrator | Friday 30 January 2026 05:57:23 +0000 (0:00:01.162) 0:09:16.690 ******** 2026-01-30 05:58:03.823958 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-01-30 05:58:03.823963 | orchestrator | 2026-01-30 05:58:03.823967 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-30 05:58:03.823991 | orchestrator | Friday 30 January 2026 05:57:24 +0000 (0:00:01.430) 0:09:18.121 ******** 2026-01-30 05:58:03.823997 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:58:03.824003 | orchestrator | 2026-01-30 05:58:03.824009 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-30 05:58:03.824015 | orchestrator | Friday 30 January 2026 05:57:25 +0000 (0:00:01.127) 0:09:19.248 ******** 2026-01-30 05:58:03.824020 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:58:03.824026 | orchestrator | 2026-01-30 05:58:03.824032 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-30 05:58:03.824037 | orchestrator | Friday 30 January 2026 05:57:26 +0000 (0:00:01.137) 0:09:20.386 ******** 2026-01-30 05:58:03.824043 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-01-30 05:58:03.824049 | orchestrator | 2026-01-30 05:58:03.824055 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-30 05:58:03.824061 | orchestrator | Friday 30 January 2026 05:57:28 +0000 (0:00:01.504) 0:09:21.891 ******** 2026-01-30 05:58:03.824067 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.824073 | orchestrator | 2026-01-30 05:58:03.824079 | orchestrator | TASK [ceph-mon : Generate systemd 
ceph-mon target file] ************************ 2026-01-30 05:58:03.824085 | orchestrator | Friday 30 January 2026 05:57:30 +0000 (0:00:02.337) 0:09:24.228 ******** 2026-01-30 05:58:03.824090 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.824097 | orchestrator | 2026-01-30 05:58:03.824103 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-01-30 05:58:03.824109 | orchestrator | Friday 30 January 2026 05:57:32 +0000 (0:00:02.000) 0:09:26.228 ******** 2026-01-30 05:58:03.824116 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.824122 | orchestrator | 2026-01-30 05:58:03.824130 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-30 05:58:03.824138 | orchestrator | Friday 30 January 2026 05:57:35 +0000 (0:00:02.587) 0:09:28.816 ******** 2026-01-30 05:58:03.824144 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:58:03.824150 | orchestrator | 2026-01-30 05:58:03.824155 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-30 05:58:03.824161 | orchestrator | Friday 30 January 2026 05:57:38 +0000 (0:00:03.586) 0:09:32.402 ******** 2026-01-30 05:58:03.824168 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-01-30 05:58:03.824174 | orchestrator | 2026-01-30 05:58:03.824216 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-01-30 05:58:03.824224 | orchestrator | Friday 30 January 2026 05:57:40 +0000 (0:00:01.528) 0:09:33.931 ******** 2026-01-30 05:58:03.824231 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.824237 | orchestrator | 2026-01-30 05:58:03.824243 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-30 05:58:03.824250 | orchestrator | Friday 30 January 2026 05:57:42 +0000 (0:00:02.311) 0:09:36.243 ******** 2026-01-30 05:58:03.824256 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:03.824262 | orchestrator | 2026-01-30 05:58:03.824269 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-30 05:58:03.824275 | orchestrator | Friday 30 January 2026 05:57:45 +0000 (0:00:03.215) 0:09:39.458 ******** 2026-01-30 05:58:03.824282 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:58:03.824288 | orchestrator | 2026-01-30 05:58:03.824295 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-30 05:58:03.824302 | orchestrator | Friday 30 January 2026 05:57:46 +0000 (0:00:01.118) 0:09:40.577 ******** 2026-01-30 05:58:03.824316 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-30 05:58:03.824330 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-01-30 05:58:03.824335 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-30 05:58:03.824339 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-30 05:58:03.824346 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-30 05:58:03.824352 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}])  2026-01-30 05:58:03.824358 | orchestrator | 2026-01-30 05:58:03.824362 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-01-30 05:58:03.824367 | orchestrator | Friday 30 January 2026 05:57:57 +0000 (0:00:10.645) 0:09:51.223 ******** 
2026-01-30 05:58:03.824371 | orchestrator | changed: [testbed-node-0] 2026-01-30 05:58:03.824376 | orchestrator | 2026-01-30 05:58:03.824380 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 05:58:03.824385 | orchestrator | Friday 30 January 2026 05:58:00 +0000 (0:00:02.645) 0:09:53.869 ******** 2026-01-30 05:58:03.824389 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 05:58:03.824394 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-30 05:58:03.824398 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-30 05:58:03.824403 | orchestrator | 2026-01-30 05:58:03.824407 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 05:58:03.824411 | orchestrator | Friday 30 January 2026 05:58:02 +0000 (0:00:02.188) 0:09:56.058 ******** 2026-01-30 05:58:03.824416 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 05:58:03.824421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 05:58:03.824425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 05:58:03.824430 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:58:03.824434 | orchestrator | 2026-01-30 05:58:03.824438 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-01-30 05:58:03.824447 | orchestrator | Friday 30 January 2026 05:58:03 +0000 (0:00:01.359) 0:09:57.417 ******** 2026-01-30 05:58:33.019681 | orchestrator | skipping: [testbed-node-0] 2026-01-30 05:58:33.019794 | orchestrator | 2026-01-30 05:58:33.019810 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-01-30 05:58:33.019823 | orchestrator | Friday 30 January 2026 05:58:04 +0000 (0:00:01.130) 0:09:58.548 ******** 2026-01-30 05:58:33.019836 | orchestrator | ok: [testbed-node-0] 2026-01-30 05:58:33.019873 | orchestrator | 2026-01-30 05:58:33.019885 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-01-30 05:58:33.019896 | orchestrator | 2026-01-30 05:58:33.019907 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-01-30 05:58:33.020001 | orchestrator | Friday 30 January 2026 05:58:07 +0000 (0:00:02.549) 0:10:01.097 ******** 2026-01-30 05:58:33.020015 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020026 | orchestrator | 2026-01-30 05:58:33.020037 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-01-30 05:58:33.020049 | orchestrator | Friday 30 January 2026 05:58:08 +0000 (0:00:01.219) 0:10:02.317 ******** 2026-01-30 05:58:33.020061 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020071 | orchestrator | 2026-01-30 05:58:33.020082 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-01-30 05:58:33.020093 | orchestrator | Friday 30 January 2026 05:58:09 +0000 (0:00:00.756) 0:10:03.073 ******** 2026-01-30 05:58:33.020104 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:33.020115 | orchestrator | 2026-01-30 05:58:33.020126 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-01-30 05:58:33.020153 | orchestrator | Friday 30 January 2026 05:58:10 +0000 (0:00:00.767) 0:10:03.841 ******** 2026-01-30 05:58:33.020165 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020176 | orchestrator | 2026-01-30 05:58:33.020186 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 05:58:33.020197 | orchestrator | Friday 30 January 
2026 05:58:11 +0000 (0:00:00.780) 0:10:04.622 ******** 2026-01-30 05:58:33.020208 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-01-30 05:58:33.020221 | orchestrator | 2026-01-30 05:58:33.020233 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 05:58:33.020246 | orchestrator | Friday 30 January 2026 05:58:12 +0000 (0:00:01.095) 0:10:05.718 ******** 2026-01-30 05:58:33.020258 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020270 | orchestrator | 2026-01-30 05:58:33.020283 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 05:58:33.020295 | orchestrator | Friday 30 January 2026 05:58:13 +0000 (0:00:01.482) 0:10:07.200 ******** 2026-01-30 05:58:33.020308 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020321 | orchestrator | 2026-01-30 05:58:33.020333 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 05:58:33.020345 | orchestrator | Friday 30 January 2026 05:58:14 +0000 (0:00:01.201) 0:10:08.401 ******** 2026-01-30 05:58:33.020358 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020370 | orchestrator | 2026-01-30 05:58:33.020382 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 05:58:33.020394 | orchestrator | Friday 30 January 2026 05:58:16 +0000 (0:00:01.541) 0:10:09.943 ******** 2026-01-30 05:58:33.020406 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020418 | orchestrator | 2026-01-30 05:58:33.020431 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 05:58:33.020443 | orchestrator | Friday 30 January 2026 05:58:17 +0000 (0:00:01.107) 0:10:11.051 ******** 2026-01-30 05:58:33.020456 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020468 | orchestrator | 2026-01-30 05:58:33.020480 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 05:58:33.020492 | orchestrator | Friday 30 January 2026 05:58:18 +0000 (0:00:01.180) 0:10:12.232 ******** 2026-01-30 05:58:33.020505 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020517 | orchestrator | 2026-01-30 05:58:33.020530 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 05:58:33.020542 | orchestrator | Friday 30 January 2026 05:58:19 +0000 (0:00:01.153) 0:10:13.385 ******** 2026-01-30 05:58:33.020555 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:33.020568 | orchestrator | 2026-01-30 05:58:33.020581 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 05:58:33.020602 | orchestrator | Friday 30 January 2026 05:58:20 +0000 (0:00:01.115) 0:10:14.501 ******** 2026-01-30 05:58:33.020613 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020624 | orchestrator | 2026-01-30 05:58:33.020634 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 05:58:33.020645 | orchestrator | Friday 30 January 2026 05:58:22 +0000 (0:00:01.117) 0:10:15.618 ******** 2026-01-30 05:58:33.020656 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 05:58:33.020666 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 05:58:33.020677 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:58:33.020688 | orchestrator | 2026-01-30 05:58:33.020699 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 05:58:33.020709 | orchestrator | Friday 30 January 2026 05:58:23 +0000 (0:00:01.660) 0:10:17.279 ******** 2026-01-30 05:58:33.020720 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:33.020731 | 
orchestrator | 2026-01-30 05:58:33.020741 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 05:58:33.020752 | orchestrator | Friday 30 January 2026 05:58:24 +0000 (0:00:01.276) 0:10:18.556 ******** 2026-01-30 05:58:33.020763 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 05:58:33.020773 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 05:58:33.020784 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 05:58:33.020795 | orchestrator | 2026-01-30 05:58:33.020805 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 05:58:33.020816 | orchestrator | Friday 30 January 2026 05:58:27 +0000 (0:00:02.964) 0:10:21.521 ******** 2026-01-30 05:58:33.020845 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-30 05:58:33.020857 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-30 05:58:33.020868 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-30 05:58:33.020879 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:33.020890 | orchestrator | 2026-01-30 05:58:33.020900 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 05:58:33.020911 | orchestrator | Friday 30 January 2026 05:58:29 +0000 (0:00:01.425) 0:10:22.947 ******** 2026-01-30 05:58:33.020952 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 05:58:33.020982 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 05:58:33.021010 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 05:58:33.021022 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:33.021033 | orchestrator | 2026-01-30 05:58:33.021044 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 05:58:33.021054 | orchestrator | Friday 30 January 2026 05:58:30 +0000 (0:00:01.472) 0:10:24.419 ******** 2026-01-30 05:58:33.021067 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:58:33.021082 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:58:33.021101 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 05:58:33.021113 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:33.021124 | orchestrator | 2026-01-30 05:58:33.021135 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 05:58:33.021146 | orchestrator | Friday 30 January 2026 05:58:31 +0000 (0:00:01.055) 0:10:25.475 ******** 2026-01-30 05:58:33.021159 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 05:58:25.516964', 'end': '2026-01-30 05:58:25.561735', 'delta': '0:00:00.044771', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 05:58:33.021183 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'b97e426bfe4f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 05:58:26.110356', 'end': '2026-01-30 05:58:26.164777', 'delta': '0:00:00.054421', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b97e426bfe4f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 05:58:51.469455 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1f4acb9ff46e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 05:58:26.755450', 'end': '2026-01-30 05:58:26.803886', 'delta': '0:00:00.048436', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f4acb9ff46e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 05:58:51.469551 | orchestrator | 2026-01-30 05:58:51.469562 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 05:58:51.469585 | orchestrator | Friday 30 January 2026 05:58:32 +0000 (0:00:01.133) 0:10:26.609 ******** 2026-01-30 05:58:51.469593 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:51.469602 | orchestrator | 2026-01-30 05:58:51.469610 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 05:58:51.469617 | orchestrator | Friday 30 January 2026 05:58:34 +0000 (0:00:01.190) 0:10:27.799 ******** 2026-01-30 05:58:51.469642 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:51.469650 | orchestrator | 2026-01-30 05:58:51.469657 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 05:58:51.469664 | orchestrator | Friday 30 January 2026 05:58:35 +0000 (0:00:01.232) 0:10:29.032 ******** 2026-01-30 05:58:51.469672 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:51.469679 | orchestrator | 2026-01-30 05:58:51.469686 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-01-30 05:58:51.469693 | orchestrator | Friday 30 January 2026 05:58:36 +0000 (0:00:01.070) 0:10:30.102 ******** 2026-01-30 05:58:51.469701 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-01-30 05:58:51.469708 | orchestrator | 2026-01-30 05:58:51.469715 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 05:58:51.469722 | orchestrator | Friday 30 January 2026 05:58:38 +0000 (0:00:02.278) 0:10:32.381 ******** 2026-01-30 05:58:51.469729 | orchestrator | ok: [testbed-node-1] 2026-01-30 05:58:51.469736 | orchestrator | 2026-01-30 05:58:51.469743 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 05:58:51.469750 | orchestrator | Friday 30 January 2026 05:58:39 +0000 (0:00:01.148) 0:10:33.530 ******** 2026-01-30 05:58:51.469757 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:51.469765 | orchestrator | 2026-01-30 05:58:51.469772 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 05:58:51.469779 | orchestrator | Friday 30 January 2026 05:58:41 +0000 (0:00:01.175) 0:10:34.706 ******** 2026-01-30 05:58:51.469786 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:51.469793 | orchestrator | 2026-01-30 05:58:51.469800 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 05:58:51.469807 | orchestrator | Friday 30 January 2026 05:58:42 +0000 (0:00:01.212) 0:10:35.918 ******** 2026-01-30 05:58:51.469815 | orchestrator | skipping: [testbed-node-1] 2026-01-30 05:58:51.469822 | orchestrator | 2026-01-30 05:58:51.469829 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 05:58:51.469836 | orchestrator | Friday 30 January 2026 05:58:43 +0000 (0:00:01.113) 0:10:37.032 ******** 
2026-01-30 05:58:51.469843 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:58:51.469850 | orchestrator |
2026-01-30 05:58:51.469858 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 05:58:51.469865 | orchestrator | Friday 30 January 2026 05:58:44 +0000 (0:00:01.134) 0:10:38.166 ********
2026-01-30 05:58:51.469872 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:58:51.469879 | orchestrator |
2026-01-30 05:58:51.469886 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 05:58:51.469894 | orchestrator | Friday 30 January 2026 05:58:45 +0000 (0:00:01.168) 0:10:39.335 ********
2026-01-30 05:58:51.469901 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:58:51.469908 | orchestrator |
2026-01-30 05:58:51.469915 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 05:58:51.469964 | orchestrator | Friday 30 January 2026 05:58:46 +0000 (0:00:01.141) 0:10:40.476 ********
2026-01-30 05:58:51.469973 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:58:51.469980 | orchestrator |
2026-01-30 05:58:51.469987 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 05:58:51.469994 | orchestrator | Friday 30 January 2026 05:58:47 +0000 (0:00:01.119) 0:10:41.595 ********
2026-01-30 05:58:51.470001 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:58:51.470009 | orchestrator |
2026-01-30 05:58:51.470068 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 05:58:51.470078 | orchestrator | Friday 30 January 2026 05:58:49 +0000 (0:00:01.119) 0:10:42.715 ********
2026-01-30 05:58:51.470086 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:58:51.470094 | orchestrator |
2026-01-30 05:58:51.470102 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 05:58:51.470111 | orchestrator | Friday 30 January 2026 05:58:50 +0000 (0:00:01.137) 0:10:43.852 ********
2026-01-30 05:58:51.470141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:51.470153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:51.470166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:51.470176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 05:58:51.470186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:51.470195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:51.470203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:51.470222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '668a7bb6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 05:58:52.674530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:52.674609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 05:58:52.674618 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:58:52.674625 | orchestrator |
2026-01-30 05:58:52.674631 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-30 05:58:52.674638 | orchestrator | Friday 30 January 2026 05:58:51 +0000 (0:00:01.212) 0:10:45.065 ********
2026-01-30 05:58:52.674646 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674654 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674660 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674681 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674705 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674712 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674718 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674727 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '668a7bb6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:58:52.674747 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:59:27.268931 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 05:59:27.269067 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269080 | orchestrator |
2026-01-30 05:59:27.269090 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-30 05:59:27.269099 | orchestrator | Friday 30 January 2026 05:58:52 +0000 (0:00:01.210) 0:10:46.275 ********
2026-01-30 05:59:27.269108 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:59:27.269117 | orchestrator |
2026-01-30 05:59:27.269125 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-30 05:59:27.269133 | orchestrator | Friday 30 January 2026 05:58:54 +0000 (0:00:01.490) 0:10:47.766 ********
2026-01-30 05:59:27.269141 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:59:27.269149 | orchestrator |
2026-01-30 05:59:27.269157 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 05:59:27.269164 | orchestrator | Friday 30 January 2026 05:58:55 +0000 (0:00:01.094) 0:10:48.861 ********
2026-01-30 05:59:27.269172 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:59:27.269180 | orchestrator |
2026-01-30 05:59:27.269188 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 05:59:27.269196 | orchestrator | Friday 30 January 2026 05:58:56 +0000 (0:00:01.489) 0:10:50.350 ********
2026-01-30 05:59:27.269224 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269232 | orchestrator |
2026-01-30 05:59:27.269240 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 05:59:27.269248 | orchestrator | Friday 30 January 2026 05:58:57 +0000 (0:00:01.248) 0:10:51.444 ********
2026-01-30 05:59:27.269256 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269264 | orchestrator |
2026-01-30 05:59:27.269272 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 05:59:27.269281 | orchestrator | Friday 30 January 2026 05:58:59 +0000 (0:00:01.093) 0:10:52.693 ********
2026-01-30 05:59:27.269294 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269307 | orchestrator |
2026-01-30 05:59:27.269319 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-30 05:59:27.269333 | orchestrator | Friday 30 January 2026 05:59:00 +0000 (0:00:01.128) 0:10:53.821 ********
2026-01-30 05:59:27.269345 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-30 05:59:27.269359 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:59:27.269372 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-30 05:59:27.269385 | orchestrator |
2026-01-30 05:59:27.269397 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-30 05:59:27.269409 | orchestrator | Friday 30 January 2026 05:59:01 +0000 (0:00:01.678) 0:10:55.500 ********
2026-01-30 05:59:27.269423 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-30 05:59:27.269438 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:59:27.269451 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-30 05:59:27.269465 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269477 | orchestrator |
2026-01-30 05:59:27.269491 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-30 05:59:27.269506 | orchestrator | Friday 30 January 2026 05:59:03 +0000 (0:00:01.135) 0:10:56.635 ********
2026-01-30 05:59:27.269520 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269534 | orchestrator |
2026-01-30 05:59:27.269547 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-30 05:59:27.269561 | orchestrator | Friday 30 January 2026 05:59:04 +0000 (0:00:01.108) 0:10:57.743 ********
2026-01-30 05:59:27.269575 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 05:59:27.269591 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:59:27.269606 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 05:59:27.269620 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 05:59:27.269634 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 05:59:27.269648 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 05:59:27.269657 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 05:59:27.269667 | orchestrator |
2026-01-30 05:59:27.269676 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-30 05:59:27.269685 | orchestrator | Friday 30 January 2026 05:59:06 +0000 (0:00:02.083) 0:10:59.827 ********
2026-01-30 05:59:27.269695 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 05:59:27.269704 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:59:27.269726 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 05:59:27.269736 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 05:59:27.269762 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 05:59:27.269772 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 05:59:27.269789 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 05:59:27.269797 | orchestrator |
2026-01-30 05:59:27.269805 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-01-30 05:59:27.269813 | orchestrator | Friday 30 January 2026 05:59:08 +0000 (0:00:02.157) 0:11:01.985 ********
2026-01-30 05:59:27.269820 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269828 | orchestrator |
2026-01-30 05:59:27.269836 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-01-30 05:59:27.269844 | orchestrator | Friday 30 January 2026 05:59:09 +0000 (0:00:00.835) 0:11:02.821 ********
2026-01-30 05:59:27.269852 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269859 | orchestrator |
2026-01-30 05:59:27.269867 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-01-30 05:59:27.269875 | orchestrator | Friday 30 January 2026 05:59:10 +0000 (0:00:00.871) 0:11:03.692 ********
2026-01-30 05:59:27.269883 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269891 | orchestrator |
2026-01-30 05:59:27.269898 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-01-30 05:59:27.269906 | orchestrator | Friday 30 January 2026 05:59:10 +0000 (0:00:00.808) 0:11:04.500 ********
2026-01-30 05:59:27.269914 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269922 | orchestrator |
2026-01-30 05:59:27.269930 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-01-30 05:59:27.269960 | orchestrator | Friday 30 January 2026 05:59:12 +0000 (0:00:01.185) 0:11:05.686 ********
2026-01-30 05:59:27.269969 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.269977 | orchestrator |
2026-01-30 05:59:27.269984 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-01-30 05:59:27.269992 | orchestrator | Friday 30 January 2026 05:59:12 +0000 (0:00:00.764) 0:11:06.451 ********
2026-01-30 05:59:27.270000 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-30 05:59:27.270008 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:59:27.270064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-30 05:59:27.270073 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.270080 | orchestrator |
2026-01-30 05:59:27.270088 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-01-30 05:59:27.270096 | orchestrator | Friday 30 January 2026 05:59:13 +0000 (0:00:01.040) 0:11:07.492 ********
2026-01-30 05:59:27.270104 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-01-30 05:59:27.270111 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-01-30 05:59:27.270119 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-01-30 05:59:27.270127 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-01-30 05:59:27.270134 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-01-30 05:59:27.270142 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-01-30 05:59:27.270150 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.270158 | orchestrator |
2026-01-30 05:59:27.270165 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-01-30 05:59:27.270173 | orchestrator | Friday 30 January 2026 05:59:15 +0000 (0:00:01.316) 0:11:08.808 ********
2026-01-30 05:59:27.270181 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:59:27.270189 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 05:59:27.270196 | orchestrator |
2026-01-30 05:59:27.270208 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-01-30 05:59:27.270222 | orchestrator | Friday 30 January 2026 05:59:18 +0000 (0:00:03.709) 0:11:12.518 ********
2026-01-30 05:59:27.270236 | orchestrator | changed: [testbed-node-1]
2026-01-30 05:59:27.270250 | orchestrator |
2026-01-30 05:59:27.270274 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 05:59:27.270289 | orchestrator | Friday 30 January 2026 05:59:21 +0000 (0:00:02.301) 0:11:14.820 ********
2026-01-30 05:59:27.270303 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-01-30 05:59:27.270319 | orchestrator |
2026-01-30 05:59:27.270334 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 05:59:27.270349 | orchestrator | Friday 30 January 2026 05:59:22 +0000 (0:00:01.133) 0:11:15.953 ********
2026-01-30 05:59:27.270358 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-01-30 05:59:27.270366 | orchestrator |
2026-01-30 05:59:27.270374 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 05:59:27.270381 | orchestrator | Friday 30 January 2026 05:59:23 +0000 (0:00:01.111) 0:11:17.064 ********
2026-01-30 05:59:27.270389 | orchestrator | ok: [testbed-node-1]
2026-01-30 05:59:27.270397 | orchestrator |
2026-01-30 05:59:27.270411 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 05:59:27.270424 | orchestrator | Friday 30 January 2026 05:59:25 +0000 (0:00:01.554) 0:11:18.619 ********
2026-01-30 05:59:27.270437 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.270450 | orchestrator |
2026-01-30 05:59:27.270463 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 05:59:27.270476 | orchestrator | Friday 30 January 2026 05:59:26 +0000 (0:00:01.120) 0:11:19.739 ********
2026-01-30 05:59:27.270489 | orchestrator | skipping: [testbed-node-1]
2026-01-30 05:59:27.270502 | orchestrator |
2026-01-30 05:59:27.270516 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 05:59:27.270541 | orchestrator | Friday 30 January 2026 05:59:27 +0000 (0:00:01.111) 0:11:20.851 ********
2026-01-30 06:00:09.512891 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513003 | orchestrator |
2026-01-30 06:00:09.513013 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 06:00:09.513019 | orchestrator | Friday 30 January 2026 05:59:28 +0000 (0:00:01.134) 0:11:21.986 ********
2026-01-30 06:00:09.513024 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:00:09.513028 | orchestrator |
2026-01-30 06:00:09.513033 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 06:00:09.513037 | orchestrator | Friday 30 January 2026 05:59:29 +0000 (0:00:01.538) 0:11:23.524 ********
2026-01-30 06:00:09.513041 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513045 | orchestrator |
2026-01-30 06:00:09.513050 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 06:00:09.513053 | orchestrator | Friday 30 January 2026 05:59:31 +0000 (0:00:01.132) 0:11:24.657 ********
2026-01-30 06:00:09.513057 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513061 | orchestrator |
2026-01-30 06:00:09.513065 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 06:00:09.513069 | orchestrator | Friday 30 January 2026 05:59:32 +0000 (0:00:01.126) 0:11:25.783 ********
2026-01-30 06:00:09.513073 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:00:09.513076 | orchestrator |
2026-01-30 06:00:09.513080 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 06:00:09.513084 | orchestrator | Friday 30 January 2026 05:59:33 +0000 (0:00:01.592) 0:11:27.375 ********
2026-01-30 06:00:09.513088 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:00:09.513092 | orchestrator |
2026-01-30 06:00:09.513095 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 06:00:09.513099 | orchestrator | Friday 30 January 2026 05:59:35 +0000 (0:00:01.624) 0:11:29.000 ********
2026-01-30 06:00:09.513103 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513107 | orchestrator |
2026-01-30 06:00:09.513110 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 06:00:09.513114 | orchestrator | Friday 30 January 2026 05:59:36 +0000 (0:00:00.755) 0:11:29.755 ********
2026-01-30 06:00:09.513133 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:00:09.513137 | orchestrator |
2026-01-30 06:00:09.513141 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 06:00:09.513145 | orchestrator | Friday 30 January 2026 05:59:36 +0000 (0:00:00.791) 0:11:30.547 ********
2026-01-30 06:00:09.513149 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513153 | orchestrator |
2026-01-30 06:00:09.513157 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 06:00:09.513161 | orchestrator | Friday 30 January 2026 05:59:37 +0000 (0:00:00.772) 0:11:31.319 ********
2026-01-30 06:00:09.513164 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513168 | orchestrator |
2026-01-30 06:00:09.513172 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 06:00:09.513176 | orchestrator | Friday 30 January 2026 05:59:38 +0000 (0:00:00.799) 0:11:32.119 ********
2026-01-30 06:00:09.513180 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513183 | orchestrator |
2026-01-30 06:00:09.513187 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 06:00:09.513191 | orchestrator | Friday 30 January 2026 05:59:39 +0000 (0:00:00.759) 0:11:32.878 ********
2026-01-30 06:00:09.513195 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513198 | orchestrator |
2026-01-30 06:00:09.513202 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 06:00:09.513206 | orchestrator | Friday 30 January 2026 05:59:40 +0000 (0:00:00.773) 0:11:33.652 ********
2026-01-30 06:00:09.513210 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513213 | orchestrator |
2026-01-30 06:00:09.513217 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 06:00:09.513221 | orchestrator | Friday 30 January 2026 05:59:40 +0000 (0:00:00.802) 0:11:34.454 ********
2026-01-30 06:00:09.513225 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:00:09.513228 | orchestrator |
2026-01-30 06:00:09.513232 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 06:00:09.513236 | orchestrator | Friday 30 January 2026 05:59:41 +0000 (0:00:00.895) 0:11:35.350 ********
2026-01-30 06:00:09.513240 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:00:09.513243 | orchestrator |
2026-01-30 06:00:09.513247 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 06:00:09.513251 | orchestrator | Friday 30 January 2026 05:59:42 +0000 (0:00:00.781) 0:11:36.132 ********
2026-01-30 06:00:09.513262 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:00:09.513272 | orchestrator |
2026-01-30 06:00:09.513276 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 06:00:09.513280 | orchestrator | Friday 30 January 2026 05:59:43 +0000 (0:00:00.799) 0:11:36.931 ********
2026-01-30 06:00:09.513284 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513287 | orchestrator |
2026-01-30 06:00:09.513291 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 06:00:09.513295 | orchestrator | Friday 30 January 2026 05:59:44 +0000 (0:00:00.774) 0:11:37.706 ********
2026-01-30 06:00:09.513299 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513302 | orchestrator |
2026-01-30 06:00:09.513337 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 06:00:09.513341 | orchestrator | Friday 30 January 2026 05:59:44 +0000 (0:00:00.794) 0:11:38.501 ********
2026-01-30 06:00:09.513345 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513349 | orchestrator |
2026-01-30 06:00:09.513353 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 06:00:09.513357 | orchestrator | Friday 30 January 2026 05:59:45 +0000 (0:00:00.776) 0:11:39.278 ********
2026-01-30 06:00:09.513360 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513364 | orchestrator |
2026-01-30 06:00:09.513371 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 06:00:09.513374 | orchestrator | Friday 30 January 2026 05:59:46 +0000 (0:00:00.802) 0:11:40.081 ********
2026-01-30 06:00:09.513382 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513386 | orchestrator |
2026-01-30 06:00:09.513401 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 06:00:09.513405 | orchestrator | Friday 30 January 2026 05:59:47 +0000 (0:00:00.767) 0:11:40.848 ********
2026-01-30 06:00:09.513409 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:00:09.513412 | orchestrator |
2026-01-30 06:00:09.513416 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 06:00:09.513420 | orchestrator | Friday 30 January 2026 05:59:48 +0000 (0:00:00.823) 0:11:41.672 ******** 2026-01-30 06:00:09.513424 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513428 | orchestrator | 2026-01-30 06:00:09.513432 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:00:09.513436 | orchestrator | Friday 30 January 2026 05:59:48 +0000 (0:00:00.828) 0:11:42.501 ******** 2026-01-30 06:00:09.513440 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513444 | orchestrator | 2026-01-30 06:00:09.513449 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:00:09.513453 | orchestrator | Friday 30 January 2026 05:59:49 +0000 (0:00:00.764) 0:11:43.265 ******** 2026-01-30 06:00:09.513458 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513462 | orchestrator | 2026-01-30 06:00:09.513467 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:00:09.513471 | orchestrator | Friday 30 January 2026 05:59:50 +0000 (0:00:00.783) 0:11:44.049 ******** 2026-01-30 06:00:09.513476 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513480 | orchestrator | 2026-01-30 06:00:09.513485 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:00:09.513489 | orchestrator | Friday 30 January 2026 05:59:51 +0000 (0:00:00.769) 0:11:44.819 ******** 2026-01-30 06:00:09.513493 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513497 | orchestrator | 2026-01-30 06:00:09.513502 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-01-30 06:00:09.513506 | orchestrator | Friday 30 January 2026 05:59:52 +0000 (0:00:00.814) 0:11:45.633 ******** 2026-01-30 06:00:09.513511 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
06:00:09.513515 | orchestrator | 2026-01-30 06:00:09.513520 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:00:09.513524 | orchestrator | Friday 30 January 2026 05:59:52 +0000 (0:00:00.763) 0:11:46.397 ******** 2026-01-30 06:00:09.513529 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:00:09.513533 | orchestrator | 2026-01-30 06:00:09.513538 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:00:09.513542 | orchestrator | Friday 30 January 2026 05:59:54 +0000 (0:00:01.679) 0:11:48.076 ******** 2026-01-30 06:00:09.513547 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:00:09.513551 | orchestrator | 2026-01-30 06:00:09.513556 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:00:09.513560 | orchestrator | Friday 30 January 2026 05:59:56 +0000 (0:00:02.174) 0:11:50.250 ******** 2026-01-30 06:00:09.513565 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-01-30 06:00:09.513570 | orchestrator | 2026-01-30 06:00:09.513575 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:00:09.513579 | orchestrator | Friday 30 January 2026 05:59:57 +0000 (0:00:01.127) 0:11:51.378 ******** 2026-01-30 06:00:09.513583 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513588 | orchestrator | 2026-01-30 06:00:09.513592 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:00:09.513597 | orchestrator | Friday 30 January 2026 05:59:58 +0000 (0:00:01.146) 0:11:52.525 ******** 2026-01-30 06:00:09.513601 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513605 | orchestrator | 2026-01-30 06:00:09.513610 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-01-30 06:00:09.513617 | orchestrator | Friday 30 January 2026 06:00:00 +0000 (0:00:01.161) 0:11:53.686 ******** 2026-01-30 06:00:09.513621 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:00:09.513626 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:00:09.513630 | orchestrator | 2026-01-30 06:00:09.513635 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:00:09.513639 | orchestrator | Friday 30 January 2026 06:00:02 +0000 (0:00:02.412) 0:11:56.099 ******** 2026-01-30 06:00:09.513644 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:00:09.513648 | orchestrator | 2026-01-30 06:00:09.513653 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:00:09.513657 | orchestrator | Friday 30 January 2026 06:00:03 +0000 (0:00:01.476) 0:11:57.575 ******** 2026-01-30 06:00:09.513661 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513666 | orchestrator | 2026-01-30 06:00:09.513670 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:00:09.513675 | orchestrator | Friday 30 January 2026 06:00:05 +0000 (0:00:01.135) 0:11:58.711 ******** 2026-01-30 06:00:09.513679 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513684 | orchestrator | 2026-01-30 06:00:09.513688 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:00:09.513693 | orchestrator | Friday 30 January 2026 06:00:05 +0000 (0:00:00.766) 0:11:59.477 ******** 2026-01-30 06:00:09.513697 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:09.513701 | orchestrator | 2026-01-30 06:00:09.513705 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:00:09.513709 | orchestrator | 
Friday 30 January 2026 06:00:06 +0000 (0:00:00.758) 0:12:00.236 ******** 2026-01-30 06:00:09.513713 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-01-30 06:00:09.513716 | orchestrator | 2026-01-30 06:00:09.513722 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:00:09.513726 | orchestrator | Friday 30 January 2026 06:00:07 +0000 (0:00:01.112) 0:12:01.349 ******** 2026-01-30 06:00:09.513730 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:00:09.513734 | orchestrator | 2026-01-30 06:00:09.513738 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:00:09.513744 | orchestrator | Friday 30 January 2026 06:00:09 +0000 (0:00:01.762) 0:12:03.111 ******** 2026-01-30 06:00:48.481078 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:00:48.481208 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:00:48.481225 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:00:48.481236 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481246 | orchestrator | 2026-01-30 06:00:48.481257 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:00:48.481267 | orchestrator | Friday 30 January 2026 06:00:10 +0000 (0:00:01.138) 0:12:04.250 ******** 2026-01-30 06:00:48.481277 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481288 | orchestrator | 2026-01-30 06:00:48.481306 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:00:48.481331 | orchestrator | Friday 30 January 2026 06:00:11 +0000 (0:00:01.085) 0:12:05.335 ******** 2026-01-30 06:00:48.481350 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
06:00:48.481366 | orchestrator | 2026-01-30 06:00:48.481383 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:00:48.481397 | orchestrator | Friday 30 January 2026 06:00:12 +0000 (0:00:01.150) 0:12:06.485 ******** 2026-01-30 06:00:48.481413 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481428 | orchestrator | 2026-01-30 06:00:48.481445 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:00:48.481461 | orchestrator | Friday 30 January 2026 06:00:14 +0000 (0:00:01.175) 0:12:07.661 ******** 2026-01-30 06:00:48.481510 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481529 | orchestrator | 2026-01-30 06:00:48.481547 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:00:48.481565 | orchestrator | Friday 30 January 2026 06:00:15 +0000 (0:00:01.141) 0:12:08.802 ******** 2026-01-30 06:00:48.481582 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481596 | orchestrator | 2026-01-30 06:00:48.481608 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:00:48.481620 | orchestrator | Friday 30 January 2026 06:00:15 +0000 (0:00:00.772) 0:12:09.575 ******** 2026-01-30 06:00:48.481631 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:00:48.481643 | orchestrator | 2026-01-30 06:00:48.481655 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:00:48.481666 | orchestrator | Friday 30 January 2026 06:00:18 +0000 (0:00:02.369) 0:12:11.945 ******** 2026-01-30 06:00:48.481677 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:00:48.481688 | orchestrator | 2026-01-30 06:00:48.481700 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:00:48.481711 | orchestrator | Friday 30 January 2026 
06:00:19 +0000 (0:00:00.745) 0:12:12.690 ******** 2026-01-30 06:00:48.481722 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-01-30 06:00:48.481734 | orchestrator | 2026-01-30 06:00:48.481745 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:00:48.481756 | orchestrator | Friday 30 January 2026 06:00:20 +0000 (0:00:01.188) 0:12:13.879 ******** 2026-01-30 06:00:48.481767 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481779 | orchestrator | 2026-01-30 06:00:48.481790 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:00:48.481801 | orchestrator | Friday 30 January 2026 06:00:21 +0000 (0:00:01.106) 0:12:14.986 ******** 2026-01-30 06:00:48.481814 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481825 | orchestrator | 2026-01-30 06:00:48.481835 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:00:48.481845 | orchestrator | Friday 30 January 2026 06:00:22 +0000 (0:00:01.141) 0:12:16.128 ******** 2026-01-30 06:00:48.481854 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481864 | orchestrator | 2026-01-30 06:00:48.481873 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:00:48.481883 | orchestrator | Friday 30 January 2026 06:00:23 +0000 (0:00:01.113) 0:12:17.242 ******** 2026-01-30 06:00:48.481892 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.481901 | orchestrator | 2026-01-30 06:00:48.482007 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:00:48.482107 | orchestrator | Friday 30 January 2026 06:00:24 +0000 (0:00:01.116) 0:12:18.359 ******** 2026-01-30 06:00:48.482119 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482128 | 
orchestrator | 2026-01-30 06:00:48.482138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:00:48.482148 | orchestrator | Friday 30 January 2026 06:00:25 +0000 (0:00:01.124) 0:12:19.483 ******** 2026-01-30 06:00:48.482157 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482166 | orchestrator | 2026-01-30 06:00:48.482176 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:00:48.482186 | orchestrator | Friday 30 January 2026 06:00:27 +0000 (0:00:01.131) 0:12:20.615 ******** 2026-01-30 06:00:48.482195 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482204 | orchestrator | 2026-01-30 06:00:48.482214 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:00:48.482224 | orchestrator | Friday 30 January 2026 06:00:28 +0000 (0:00:01.127) 0:12:21.742 ******** 2026-01-30 06:00:48.482233 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482242 | orchestrator | 2026-01-30 06:00:48.482252 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:00:48.482274 | orchestrator | Friday 30 January 2026 06:00:29 +0000 (0:00:00.922) 0:12:22.665 ******** 2026-01-30 06:00:48.482283 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:00:48.482293 | orchestrator | 2026-01-30 06:00:48.482316 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:00:48.482326 | orchestrator | Friday 30 January 2026 06:00:29 +0000 (0:00:00.611) 0:12:23.276 ******** 2026-01-30 06:00:48.482335 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-01-30 06:00:48.482346 | orchestrator | 2026-01-30 06:00:48.482356 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 
06:00:48.482387 | orchestrator | Friday 30 January 2026 06:00:30 +0000 (0:00:00.909) 0:12:24.185 ******** 2026-01-30 06:00:48.482398 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-01-30 06:00:48.482408 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-30 06:00:48.482417 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-30 06:00:48.482427 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-30 06:00:48.482436 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-30 06:00:48.482446 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-30 06:00:48.482455 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-30 06:00:48.482465 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:00:48.482475 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:00:48.482485 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:00:48.482494 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:00:48.482504 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:00:48.482513 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:00:48.482523 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:00:48.482532 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-01-30 06:00:48.482542 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-01-30 06:00:48.482551 | orchestrator | 2026-01-30 06:00:48.482561 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:00:48.482571 | orchestrator | Friday 30 January 2026 06:00:37 +0000 (0:00:07.046) 0:12:31.231 ******** 2026-01-30 06:00:48.482580 | orchestrator | skipping: 
[testbed-node-1] 2026-01-30 06:00:48.482590 | orchestrator | 2026-01-30 06:00:48.482599 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:00:48.482609 | orchestrator | Friday 30 January 2026 06:00:38 +0000 (0:00:00.744) 0:12:31.976 ******** 2026-01-30 06:00:48.482618 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482628 | orchestrator | 2026-01-30 06:00:48.482637 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:00:48.482647 | orchestrator | Friday 30 January 2026 06:00:39 +0000 (0:00:00.747) 0:12:32.723 ******** 2026-01-30 06:00:48.482656 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482666 | orchestrator | 2026-01-30 06:00:48.482675 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:00:48.482685 | orchestrator | Friday 30 January 2026 06:00:39 +0000 (0:00:00.746) 0:12:33.470 ******** 2026-01-30 06:00:48.482694 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482704 | orchestrator | 2026-01-30 06:00:48.482713 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-30 06:00:48.482723 | orchestrator | Friday 30 January 2026 06:00:40 +0000 (0:00:00.744) 0:12:34.215 ******** 2026-01-30 06:00:48.482732 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482742 | orchestrator | 2026-01-30 06:00:48.482751 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:00:48.482761 | orchestrator | Friday 30 January 2026 06:00:41 +0000 (0:00:00.731) 0:12:34.946 ******** 2026-01-30 06:00:48.482777 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482787 | orchestrator | 2026-01-30 06:00:48.482796 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-01-30 06:00:48.482806 | orchestrator | Friday 30 January 2026 06:00:42 +0000 (0:00:00.747) 0:12:35.693 ******** 2026-01-30 06:00:48.482815 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482825 | orchestrator | 2026-01-30 06:00:48.482835 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:00:48.482844 | orchestrator | Friday 30 January 2026 06:00:42 +0000 (0:00:00.775) 0:12:36.468 ******** 2026-01-30 06:00:48.482854 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482863 | orchestrator | 2026-01-30 06:00:48.482873 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:00:48.482883 | orchestrator | Friday 30 January 2026 06:00:43 +0000 (0:00:00.764) 0:12:37.233 ******** 2026-01-30 06:00:48.482892 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482902 | orchestrator | 2026-01-30 06:00:48.482911 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:00:48.482921 | orchestrator | Friday 30 January 2026 06:00:44 +0000 (0:00:00.785) 0:12:38.018 ******** 2026-01-30 06:00:48.482930 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.482940 | orchestrator | 2026-01-30 06:00:48.482949 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:00:48.482959 | orchestrator | Friday 30 January 2026 06:00:45 +0000 (0:00:00.796) 0:12:38.815 ******** 2026-01-30 06:00:48.483025 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.483042 | orchestrator | 2026-01-30 06:00:48.483058 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:00:48.483070 | orchestrator | Friday 30 January 2026 06:00:46 +0000 (0:00:00.817) 0:12:39.632 ******** 2026-01-30 
06:00:48.483079 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.483089 | orchestrator | 2026-01-30 06:00:48.483098 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:00:48.483108 | orchestrator | Friday 30 January 2026 06:00:46 +0000 (0:00:00.767) 0:12:40.400 ******** 2026-01-30 06:00:48.483123 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.483133 | orchestrator | 2026-01-30 06:00:48.483143 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:00:48.483152 | orchestrator | Friday 30 January 2026 06:00:47 +0000 (0:00:00.870) 0:12:41.271 ******** 2026-01-30 06:00:48.483161 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:00:48.483171 | orchestrator | 2026-01-30 06:00:48.483180 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:00:48.483196 | orchestrator | Friday 30 January 2026 06:00:48 +0000 (0:00:00.805) 0:12:42.076 ******** 2026-01-30 06:01:36.375081 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375191 | orchestrator | 2026-01-30 06:01:36.375204 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:01:36.375213 | orchestrator | Friday 30 January 2026 06:00:49 +0000 (0:00:00.862) 0:12:42.939 ******** 2026-01-30 06:01:36.375221 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375229 | orchestrator | 2026-01-30 06:01:36.375237 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:01:36.375244 | orchestrator | Friday 30 January 2026 06:00:50 +0000 (0:00:00.766) 0:12:43.705 ******** 2026-01-30 06:01:36.375251 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375258 | orchestrator | 2026-01-30 06:01:36.375267 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:01:36.375275 | orchestrator | Friday 30 January 2026 06:00:50 +0000 (0:00:00.757) 0:12:44.463 ******** 2026-01-30 06:01:36.375283 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375309 | orchestrator | 2026-01-30 06:01:36.375317 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:01:36.375324 | orchestrator | Friday 30 January 2026 06:00:51 +0000 (0:00:00.766) 0:12:45.230 ******** 2026-01-30 06:01:36.375331 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375338 | orchestrator | 2026-01-30 06:01:36.375345 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:01:36.375353 | orchestrator | Friday 30 January 2026 06:00:52 +0000 (0:00:00.759) 0:12:45.989 ******** 2026-01-30 06:01:36.375360 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375367 | orchestrator | 2026-01-30 06:01:36.375375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:01:36.375382 | orchestrator | Friday 30 January 2026 06:00:53 +0000 (0:00:00.770) 0:12:46.760 ******** 2026-01-30 06:01:36.375389 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375396 | orchestrator | 2026-01-30 06:01:36.375404 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:01:36.375411 | orchestrator | Friday 30 January 2026 06:00:53 +0000 (0:00:00.807) 0:12:47.568 ******** 2026-01-30 06:01:36.375418 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-30 06:01:36.375425 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-30 06:01:36.375432 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-30 06:01:36.375439 | orchestrator | skipping: [testbed-node-1] 
2026-01-30 06:01:36.375446 | orchestrator | 2026-01-30 06:01:36.375453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:01:36.375460 | orchestrator | Friday 30 January 2026 06:00:55 +0000 (0:00:01.084) 0:12:48.652 ******** 2026-01-30 06:01:36.375467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-30 06:01:36.375475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-30 06:01:36.375482 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-30 06:01:36.375489 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375496 | orchestrator | 2026-01-30 06:01:36.375503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:01:36.375510 | orchestrator | Friday 30 January 2026 06:00:56 +0000 (0:00:01.054) 0:12:49.706 ******** 2026-01-30 06:01:36.375517 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-30 06:01:36.375525 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-30 06:01:36.375536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-30 06:01:36.375550 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375563 | orchestrator | 2026-01-30 06:01:36.375576 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:01:36.375588 | orchestrator | Friday 30 January 2026 06:00:57 +0000 (0:00:01.050) 0:12:50.757 ******** 2026-01-30 06:01:36.375601 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375614 | orchestrator | 2026-01-30 06:01:36.375628 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:01:36.375641 | orchestrator | Friday 30 January 2026 06:00:57 +0000 (0:00:00.754) 0:12:51.511 ******** 2026-01-30 06:01:36.375656 | orchestrator | skipping: 
[testbed-node-1] => (item=0)  2026-01-30 06:01:36.375669 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375679 | orchestrator | 2026-01-30 06:01:36.375687 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:01:36.375694 | orchestrator | Friday 30 January 2026 06:00:58 +0000 (0:00:00.928) 0:12:52.440 ******** 2026-01-30 06:01:36.375701 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.375708 | orchestrator | 2026-01-30 06:01:36.375716 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-30 06:01:36.375723 | orchestrator | Friday 30 January 2026 06:01:00 +0000 (0:00:01.514) 0:12:53.955 ******** 2026-01-30 06:01:36.375730 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.375737 | orchestrator | 2026-01-30 06:01:36.375755 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-30 06:01:36.375767 | orchestrator | Friday 30 January 2026 06:01:01 +0000 (0:00:00.788) 0:12:54.743 ******** 2026-01-30 06:01:36.375779 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1 2026-01-30 06:01:36.375791 | orchestrator | 2026-01-30 06:01:36.375803 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-30 06:01:36.375831 | orchestrator | Friday 30 January 2026 06:01:02 +0000 (0:00:01.128) 0:12:55.872 ******** 2026-01-30 06:01:36.375844 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-01-30 06:01:36.375856 | orchestrator | 2026-01-30 06:01:36.375868 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-30 06:01:36.375879 | orchestrator | Friday 30 January 2026 06:01:05 +0000 (0:00:03.246) 0:12:59.118 ******** 2026-01-30 06:01:36.375891 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.375917 | orchestrator | 
2026-01-30 06:01:36.375930 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-30 06:01:36.375962 | orchestrator | Friday 30 January 2026 06:01:06 +0000 (0:00:01.181) 0:13:00.299 ******** 2026-01-30 06:01:36.376005 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376019 | orchestrator | 2026-01-30 06:01:36.376030 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-30 06:01:36.376041 | orchestrator | Friday 30 January 2026 06:01:07 +0000 (0:00:01.172) 0:13:01.472 ******** 2026-01-30 06:01:36.376052 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376064 | orchestrator | 2026-01-30 06:01:36.376074 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-30 06:01:36.376086 | orchestrator | Friday 30 January 2026 06:01:09 +0000 (0:00:01.147) 0:13:02.619 ******** 2026-01-30 06:01:36.376097 | orchestrator | changed: [testbed-node-1] 2026-01-30 06:01:36.376107 | orchestrator | 2026-01-30 06:01:36.376118 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-30 06:01:36.376130 | orchestrator | Friday 30 January 2026 06:01:11 +0000 (0:00:02.150) 0:13:04.770 ******** 2026-01-30 06:01:36.376141 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376153 | orchestrator | 2026-01-30 06:01:36.376164 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-30 06:01:36.376175 | orchestrator | Friday 30 January 2026 06:01:12 +0000 (0:00:01.646) 0:13:06.417 ******** 2026-01-30 06:01:36.376187 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376200 | orchestrator | 2026-01-30 06:01:36.376213 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-30 06:01:36.376226 | orchestrator | Friday 30 January 2026 06:01:14 +0000 (0:00:01.467) 0:13:07.884 
******** 2026-01-30 06:01:36.376239 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376250 | orchestrator | 2026-01-30 06:01:36.376263 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-30 06:01:36.376274 | orchestrator | Friday 30 January 2026 06:01:15 +0000 (0:00:01.643) 0:13:09.528 ******** 2026-01-30 06:01:36.376287 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:01:36.376298 | orchestrator | 2026-01-30 06:01:36.376310 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-30 06:01:36.376323 | orchestrator | Friday 30 January 2026 06:01:17 +0000 (0:00:01.596) 0:13:11.124 ******** 2026-01-30 06:01:36.376335 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:01:36.376347 | orchestrator | 2026-01-30 06:01:36.376359 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-30 06:01:36.376371 | orchestrator | Friday 30 January 2026 06:01:19 +0000 (0:00:01.651) 0:13:12.776 ******** 2026-01-30 06:01:36.376383 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 06:01:36.376395 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-01-30 06:01:36.376406 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 06:01:36.376432 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-01-30 06:01:36.376443 | orchestrator | 2026-01-30 06:01:36.376456 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-30 06:01:36.376467 | orchestrator | Friday 30 January 2026 06:01:23 +0000 (0:00:04.175) 0:13:16.951 ******** 2026-01-30 06:01:36.376478 | orchestrator | changed: [testbed-node-1] 2026-01-30 06:01:36.376490 | orchestrator | 2026-01-30 06:01:36.376502 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon 
container command] ************************** 2026-01-30 06:01:36.376514 | orchestrator | Friday 30 January 2026 06:01:25 +0000 (0:00:02.107) 0:13:19.059 ******** 2026-01-30 06:01:36.376525 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376537 | orchestrator | 2026-01-30 06:01:36.376549 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-30 06:01:36.376560 | orchestrator | Friday 30 January 2026 06:01:26 +0000 (0:00:01.115) 0:13:20.175 ******** 2026-01-30 06:01:36.376570 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376581 | orchestrator | 2026-01-30 06:01:36.376592 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-30 06:01:36.376603 | orchestrator | Friday 30 January 2026 06:01:27 +0000 (0:00:01.124) 0:13:21.300 ******** 2026-01-30 06:01:36.376614 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376625 | orchestrator | 2026-01-30 06:01:36.376636 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-30 06:01:36.376647 | orchestrator | Friday 30 January 2026 06:01:29 +0000 (0:00:01.831) 0:13:23.131 ******** 2026-01-30 06:01:36.376658 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:01:36.376669 | orchestrator | 2026-01-30 06:01:36.376680 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-30 06:01:36.376693 | orchestrator | Friday 30 January 2026 06:01:31 +0000 (0:00:01.515) 0:13:24.646 ******** 2026-01-30 06:01:36.376705 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.376719 | orchestrator | 2026-01-30 06:01:36.376732 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-30 06:01:36.376745 | orchestrator | Friday 30 January 2026 06:01:31 +0000 (0:00:00.772) 0:13:25.419 ******** 2026-01-30 06:01:36.376756 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-01-30 06:01:36.376769 | orchestrator | 2026-01-30 06:01:36.376781 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-30 06:01:36.376791 | orchestrator | Friday 30 January 2026 06:01:32 +0000 (0:00:01.112) 0:13:26.531 ******** 2026-01-30 06:01:36.376803 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.376814 | orchestrator | 2026-01-30 06:01:36.376825 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-30 06:01:36.376846 | orchestrator | Friday 30 January 2026 06:01:34 +0000 (0:00:01.122) 0:13:27.653 ******** 2026-01-30 06:01:36.376857 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:01:36.376867 | orchestrator | 2026-01-30 06:01:36.376877 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-30 06:01:36.376887 | orchestrator | Friday 30 January 2026 06:01:35 +0000 (0:00:01.179) 0:13:28.833 ******** 2026-01-30 06:01:36.376898 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-01-30 06:01:36.376909 | orchestrator | 2026-01-30 06:01:36.376934 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-30 06:02:47.109326 | orchestrator | Friday 30 January 2026 06:01:36 +0000 (0:00:01.136) 0:13:29.970 ******** 2026-01-30 06:02:47.109427 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:02:47.109439 | orchestrator | 2026-01-30 06:02:47.109449 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-30 06:02:47.109459 | orchestrator | Friday 30 January 2026 06:01:38 +0000 (0:00:02.487) 0:13:32.457 ******** 2026-01-30 06:02:47.109467 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:02:47.109476 | orchestrator | 2026-01-30 06:02:47.109484 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-01-30 06:02:47.109512 | orchestrator | Friday 30 January 2026 06:01:40 +0000 (0:00:02.081) 0:13:34.539 ******** 2026-01-30 06:02:47.109520 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:02:47.109528 | orchestrator | 2026-01-30 06:02:47.109536 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-30 06:02:47.109544 | orchestrator | Friday 30 January 2026 06:01:43 +0000 (0:00:02.645) 0:13:37.184 ******** 2026-01-30 06:02:47.109553 | orchestrator | changed: [testbed-node-1] 2026-01-30 06:02:47.109561 | orchestrator | 2026-01-30 06:02:47.109569 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-30 06:02:47.109577 | orchestrator | Friday 30 January 2026 06:01:46 +0000 (0:00:03.404) 0:13:40.589 ******** 2026-01-30 06:02:47.109585 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-01-30 06:02:47.109593 | orchestrator | 2026-01-30 06:02:47.109601 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-30 06:02:47.109609 | orchestrator | Friday 30 January 2026 06:01:48 +0000 (0:00:01.179) 0:13:41.769 ******** 2026-01-30 06:02:47.109617 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-01-30 06:02:47.109625 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:02:47.109632 | orchestrator | 2026-01-30 06:02:47.109640 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-30 06:02:47.109648 | orchestrator | Friday 30 January 2026 06:02:11 +0000 (0:00:23.151) 0:14:04.920 ******** 2026-01-30 06:02:47.109656 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:02:47.109663 | orchestrator | 2026-01-30 06:02:47.109671 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-30 06:02:47.109679 | orchestrator | Friday 30 January 2026 06:02:14 +0000 (0:00:02.809) 0:14:07.729 ******** 2026-01-30 06:02:47.109686 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:02:47.109694 | orchestrator | 2026-01-30 06:02:47.109702 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-30 06:02:47.109710 | orchestrator | Friday 30 January 2026 06:02:14 +0000 (0:00:00.753) 0:14:08.482 ******** 2026-01-30 06:02:47.109720 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-30 06:02:47.109731 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-30 06:02:47.109740 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-30 06:02:47.109747 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-30 06:02:47.109771 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-30 06:02:47.109801 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}])  2026-01-30 06:02:47.109812 | orchestrator | 2026-01-30 06:02:47.109820 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-01-30 06:02:47.109828 | orchestrator | Friday 30 January 2026 06:02:25 +0000 (0:00:10.601) 0:14:19.083 ******** 2026-01-30 06:02:47.109836 | orchestrator | changed: [testbed-node-1] 2026-01-30 06:02:47.109844 | orchestrator | 
2026-01-30 06:02:47.109852 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:02:47.109859 | orchestrator | Friday 30 January 2026 06:02:27 +0000 (0:00:02.379) 0:14:21.463 ******** 2026-01-30 06:02:47.109867 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:02:47.109877 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-01-30 06:02:47.109886 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-01-30 06:02:47.109895 | orchestrator | 2026-01-30 06:02:47.109904 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:02:47.109913 | orchestrator | Friday 30 January 2026 06:02:29 +0000 (0:00:01.519) 0:14:22.982 ******** 2026-01-30 06:02:47.109923 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-30 06:02:47.109932 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-30 06:02:47.109942 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-30 06:02:47.109951 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:02:47.109960 | orchestrator | 2026-01-30 06:02:47.109969 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-01-30 06:02:47.109978 | orchestrator | Friday 30 January 2026 06:02:30 +0000 (0:00:01.026) 0:14:24.009 ******** 2026-01-30 06:02:47.109988 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:02:47.109997 | orchestrator | 2026-01-30 06:02:47.110085 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-01-30 06:02:47.110100 | orchestrator | Friday 30 January 2026 06:02:31 +0000 (0:00:00.764) 0:14:24.774 ******** 2026-01-30 06:02:47.110115 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:02:47.110130 | orchestrator | 2026-01-30 06:02:47.110146 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-01-30 06:02:47.110160 | orchestrator | 2026-01-30 06:02:47.110175 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-01-30 06:02:47.110185 | orchestrator | Friday 30 January 2026 06:02:33 +0000 (0:00:02.384) 0:14:27.158 ******** 2026-01-30 06:02:47.110195 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110204 | orchestrator | 2026-01-30 06:02:47.110212 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-01-30 06:02:47.110220 | orchestrator | Friday 30 January 2026 06:02:34 +0000 (0:00:01.140) 0:14:28.299 ******** 2026-01-30 06:02:47.110227 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110235 | orchestrator | 2026-01-30 06:02:47.110243 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-01-30 06:02:47.110251 | orchestrator | Friday 30 January 2026 06:02:35 +0000 (0:00:00.778) 0:14:29.078 ******** 2026-01-30 06:02:47.110259 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:02:47.110266 | orchestrator | 2026-01-30 06:02:47.110274 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-01-30 06:02:47.110282 | orchestrator | Friday 30 January 2026 06:02:36 +0000 (0:00:00.762) 0:14:29.841 ******** 2026-01-30 06:02:47.110290 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110304 | orchestrator | 2026-01-30 06:02:47.110312 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:02:47.110319 | orchestrator | Friday 30 January 
2026 06:02:37 +0000 (0:00:00.774) 0:14:30.615 ******** 2026-01-30 06:02:47.110327 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-01-30 06:02:47.110335 | orchestrator | 2026-01-30 06:02:47.110343 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 06:02:47.110351 | orchestrator | Friday 30 January 2026 06:02:38 +0000 (0:00:01.377) 0:14:31.993 ******** 2026-01-30 06:02:47.110358 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110366 | orchestrator | 2026-01-30 06:02:47.110374 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 06:02:47.110382 | orchestrator | Friday 30 January 2026 06:02:39 +0000 (0:00:01.458) 0:14:33.452 ******** 2026-01-30 06:02:47.110390 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110397 | orchestrator | 2026-01-30 06:02:47.110405 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:02:47.110413 | orchestrator | Friday 30 January 2026 06:02:40 +0000 (0:00:01.135) 0:14:34.588 ******** 2026-01-30 06:02:47.110421 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110428 | orchestrator | 2026-01-30 06:02:47.110436 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:02:47.110444 | orchestrator | Friday 30 January 2026 06:02:42 +0000 (0:00:01.507) 0:14:36.096 ******** 2026-01-30 06:02:47.110452 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110465 | orchestrator | 2026-01-30 06:02:47.110478 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 06:02:47.110487 | orchestrator | Friday 30 January 2026 06:02:43 +0000 (0:00:01.180) 0:14:37.276 ******** 2026-01-30 06:02:47.110495 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110503 | orchestrator | 2026-01-30 06:02:47.110512 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 06:02:47.110527 | orchestrator | Friday 30 January 2026 06:02:44 +0000 (0:00:01.141) 0:14:38.418 ******** 2026-01-30 06:02:47.110536 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:02:47.110544 | orchestrator | 2026-01-30 06:02:47.110553 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 06:02:47.110561 | orchestrator | Friday 30 January 2026 06:02:45 +0000 (0:00:01.138) 0:14:39.557 ******** 2026-01-30 06:02:47.110570 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:02:47.110578 | orchestrator | 2026-01-30 06:02:47.110587 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 06:02:47.110603 | orchestrator | Friday 30 January 2026 06:02:47 +0000 (0:00:01.145) 0:14:40.702 ******** 2026-01-30 06:03:11.942577 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:03:11.942693 | orchestrator | 2026-01-30 06:03:11.942710 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 06:03:11.942723 | orchestrator | Friday 30 January 2026 06:02:48 +0000 (0:00:01.113) 0:14:41.816 ******** 2026-01-30 06:03:11.942734 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:03:11.942746 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:03:11.942757 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-30 06:03:11.942768 | orchestrator | 2026-01-30 06:03:11.942779 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 06:03:11.942790 | orchestrator | Friday 30 January 2026 06:02:50 +0000 (0:00:01.937) 0:14:43.753 ******** 2026-01-30 06:03:11.942810 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:03:11.942829 | 
orchestrator | 2026-01-30 06:03:11.942849 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 06:03:11.942869 | orchestrator | Friday 30 January 2026 06:02:51 +0000 (0:00:01.181) 0:14:44.935 ******** 2026-01-30 06:03:11.942889 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:03:11.942937 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:03:11.942957 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-30 06:03:11.942975 | orchestrator | 2026-01-30 06:03:11.942992 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 06:03:11.943042 | orchestrator | Friday 30 January 2026 06:02:54 +0000 (0:00:03.076) 0:14:48.012 ******** 2026-01-30 06:03:11.943061 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-30 06:03:11.943078 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-30 06:03:11.943096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-30 06:03:11.943114 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:03:11.943134 | orchestrator | 2026-01-30 06:03:11.943155 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 06:03:11.943175 | orchestrator | Friday 30 January 2026 06:02:56 +0000 (0:00:01.694) 0:14:49.706 ******** 2026-01-30 06:03:11.943196 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 06:03:11.943219 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 06:03:11.943239 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 06:03:11.943258 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:03:11.943278 | orchestrator | 2026-01-30 06:03:11.943296 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 06:03:11.943314 | orchestrator | Friday 30 January 2026 06:02:58 +0000 (0:00:02.046) 0:14:51.753 ******** 2026-01-30 06:03:11.943337 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:03:11.943361 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:03:11.943401 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:03:11.943422 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:03:11.943438 | orchestrator | 2026-01-30 06:03:11.943450 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 06:03:11.943463 | orchestrator | Friday 30 January 2026 06:02:59 +0000 (0:00:01.186) 0:14:52.939 ******** 2026-01-30 06:03:11.943501 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:02:51.807955', 'end': '2026-01-30 06:02:51.863716', 'delta': '0:00:00.055761', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 06:03:11.943534 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:02:52.655285', 'end': '2026-01-30 06:02:52.706728', 'delta': '0:00:00.051443', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 06:03:11.943546 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '1f4acb9ff46e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:02:53.231335', 'end': '2026-01-30 06:02:53.274653', 'delta': '0:00:00.043318', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1f4acb9ff46e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 06:03:11.943557 | orchestrator | 2026-01-30 06:03:11.943568 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 06:03:11.943579 | orchestrator | Friday 30 January 2026 06:03:00 +0000 (0:00:01.206) 0:14:54.146 ******** 2026-01-30 06:03:11.943590 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:03:11.943601 | orchestrator | 2026-01-30 06:03:11.943612 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 06:03:11.943622 | orchestrator | Friday 30 January 2026 06:03:01 +0000 (0:00:01.260) 0:14:55.407 ******** 2026-01-30 06:03:11.943633 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:03:11.943644 | orchestrator | 2026-01-30 06:03:11.943654 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 06:03:11.943665 | orchestrator | Friday 30 January 2026 06:03:03 +0000 (0:00:01.235) 0:14:56.642 ******** 2026-01-30 06:03:11.943676 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:03:11.943687 | orchestrator | 2026-01-30 06:03:11.943697 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-01-30 06:03:11.943708 | orchestrator | Friday 30 January 2026 06:03:04 +0000 (0:00:01.107) 0:14:57.750 ******** 2026-01-30 06:03:11.943725 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] 2026-01-30 06:03:11.943743 | orchestrator | 2026-01-30 06:03:11.943761 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:03:11.943778 | orchestrator | Friday 30 January 2026 06:03:06 +0000 (0:00:02.014) 0:14:59.765 ******** 2026-01-30 06:03:11.943796 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:03:11.943811 | orchestrator | 2026-01-30 06:03:11.943829 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 06:03:11.943848 | orchestrator | Friday 30 January 2026 06:03:07 +0000 (0:00:01.136) 0:15:00.901 ******** 2026-01-30 06:03:11.943867 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:03:11.943887 | orchestrator | 2026-01-30 06:03:11.943918 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 06:03:11.943935 | orchestrator | Friday 30 January 2026 06:03:08 +0000 (0:00:01.178) 0:15:02.080 ******** 2026-01-30 06:03:11.943952 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:03:11.943968 | orchestrator | 2026-01-30 06:03:11.943985 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:03:11.944055 | orchestrator | Friday 30 January 2026 06:03:09 +0000 (0:00:01.196) 0:15:03.277 ******** 2026-01-30 06:03:11.944078 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:03:11.944098 | orchestrator | 2026-01-30 06:03:11.944118 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 06:03:11.944140 | orchestrator | Friday 30 January 2026 06:03:10 +0000 (0:00:01.136) 0:15:04.414 ******** 
2026-01-30 06:03:11.944160 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:11.944180 | orchestrator |
2026-01-30 06:03:11.944199 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 06:03:11.944233 | orchestrator | Friday 30 January 2026 06:03:11 +0000 (0:00:01.126) 0:15:05.540 ********
2026-01-30 06:03:20.339499 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:20.339649 | orchestrator |
2026-01-30 06:03:20.339678 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 06:03:20.339699 | orchestrator | Friday 30 January 2026 06:03:13 +0000 (0:00:01.109) 0:15:06.649 ********
2026-01-30 06:03:20.339718 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:20.339729 | orchestrator |
2026-01-30 06:03:20.339741 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 06:03:20.339752 | orchestrator | Friday 30 January 2026 06:03:14 +0000 (0:00:01.184) 0:15:07.834 ********
2026-01-30 06:03:20.339763 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:20.339774 | orchestrator |
2026-01-30 06:03:20.339785 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 06:03:20.339796 | orchestrator | Friday 30 January 2026 06:03:15 +0000 (0:00:01.152) 0:15:08.987 ********
2026-01-30 06:03:20.339807 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:20.339818 | orchestrator |
2026-01-30 06:03:20.339829 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 06:03:20.339841 | orchestrator | Friday 30 January 2026 06:03:16 +0000 (0:00:01.161) 0:15:10.148 ********
2026-01-30 06:03:20.339852 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:20.339863 | orchestrator |
2026-01-30 06:03:20.339873 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 06:03:20.339884 | orchestrator | Friday 30 January 2026 06:03:17 +0000 (0:00:01.163) 0:15:11.312 ********
2026-01-30 06:03:20.339898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.339914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.339926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.339969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 06:03:20.340002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.340074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.340119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.340141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b944efd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:03:20.340170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.340183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:03:20.340197 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:20.340210 | orchestrator |
2026-01-30 06:03:20.340223 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-30 06:03:20.340236 | orchestrator | Friday 30 January 2026 06:03:19 +0000 (0:00:01.384) 0:15:12.696 ********
2026-01-30 06:03:20.340256 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:20.340282 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.998772 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.998945 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.998992 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.999229 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.999293 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.999350 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b944efd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.999389 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.999410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:03:27.999429 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:27.999451 | orchestrator |
2026-01-30 06:03:27.999471 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-30 06:03:27.999491 | orchestrator | Friday 30 January 2026 06:03:20 +0000 (0:00:01.521) 0:15:13.943 ********
2026-01-30 06:03:27.999511 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:03:27.999533 | orchestrator |
2026-01-30 06:03:27.999551 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-30 06:03:27.999570 | orchestrator | Friday 30 January 2026 06:03:21 +0000 (0:00:01.130) 0:15:15.464 ********
2026-01-30 06:03:27.999587 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:03:27.999605 | orchestrator |
2026-01-30 06:03:27.999624 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:03:27.999651 | orchestrator | Friday 30 January 2026 06:03:22 +0000 (0:00:01.513) 0:15:16.595 ********
2026-01-30 06:03:27.999670 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:03:27.999688 | orchestrator |
2026-01-30 06:03:27.999707 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:03:27.999724 | orchestrator | Friday 30 January 2026 06:03:24 +0000 (0:00:01.513) 0:15:18.109 ********
2026-01-30 06:03:27.999742 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:27.999760 | orchestrator |
2026-01-30 06:03:27.999777 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:03:27.999794 | orchestrator | Friday 30 January 2026 06:03:25 +0000 (0:00:01.128) 0:15:19.237 ********
2026-01-30 06:03:27.999812 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:27.999829 | orchestrator |
2026-01-30 06:03:27.999846 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:03:27.999865 | orchestrator | Friday 30 January 2026 06:03:26 +0000 (0:00:01.225) 0:15:20.463 ********
2026-01-30 06:03:27.999883 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:03:27.999901 | orchestrator |
2026-01-30 06:03:27.999919 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-30 06:03:27.999953 | orchestrator | Friday 30 January 2026 06:03:27 +0000 (0:00:01.136) 0:15:21.600 ********
2026-01-30 06:04:07.105410 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 06:04:07.105492 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 06:04:07.105499 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:04:07.105503 | orchestrator |
2026-01-30 06:04:07.105508 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-30 06:04:07.105513 | orchestrator | Friday 30 January 2026 06:03:30 +0000 (0:00:02.113) 0:15:23.713 ********
2026-01-30 06:04:07.105534 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 06:04:07.105539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 06:04:07.105543 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:04:07.105547 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105551 | orchestrator |
2026-01-30 06:04:07.105556 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-30 06:04:07.105560 | orchestrator | Friday 30 January 2026 06:03:31 +0000 (0:00:01.170) 0:15:24.884 ********
2026-01-30 06:04:07.105564 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105568 | orchestrator |
2026-01-30 06:04:07.105572 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-30 06:04:07.105576 | orchestrator | Friday 30 January 2026 06:03:32 +0000 (0:00:01.180) 0:15:26.065 ********
2026-01-30 06:04:07.105580 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:04:07.105584 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:04:07.105588 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:04:07.105592 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 06:04:07.105596 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:04:07.105599 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:04:07.105603 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:04:07.105607 | orchestrator |
2026-01-30 06:04:07.105611 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-30 06:04:07.105614 | orchestrator | Friday 30 January 2026 06:03:34 +0000 (0:00:01.815) 0:15:27.880 ********
2026-01-30 06:04:07.105618 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:04:07.105622 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:04:07.105626 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:04:07.105629 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 06:04:07.105633 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:04:07.105637 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:04:07.105641 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:04:07.105644 | orchestrator |
2026-01-30 06:04:07.105648 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-01-30 06:04:07.105652 | orchestrator | Friday 30 January 2026 06:03:36 +0000 (0:00:02.230) 0:15:30.111 ********
2026-01-30 06:04:07.105655 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105659 | orchestrator |
2026-01-30 06:04:07.105663 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-01-30 06:04:07.105667 | orchestrator | Friday 30 January 2026 06:03:37 +0000 (0:00:00.874) 0:15:30.985 ********
2026-01-30 06:04:07.105671 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105674 | orchestrator |
2026-01-30 06:04:07.105678 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-01-30 06:04:07.105682 | orchestrator | Friday 30 January 2026 06:03:38 +0000 (0:00:00.868) 0:15:31.854 ********
2026-01-30 06:04:07.105686 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105690 | orchestrator |
2026-01-30 06:04:07.105694 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-01-30 06:04:07.105697 | orchestrator | Friday 30 January 2026 06:03:39 +0000 (0:00:00.817) 0:15:32.671 ********
2026-01-30 06:04:07.105701 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105708 | orchestrator |
2026-01-30 06:04:07.105712 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-01-30 06:04:07.105725 | orchestrator | Friday 30 January 2026 06:03:39 +0000 (0:00:00.855) 0:15:33.527 ********
2026-01-30 06:04:07.105729 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105733 | orchestrator |
2026-01-30 06:04:07.105737 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-01-30 06:04:07.105740 | orchestrator | Friday 30 January 2026 06:03:40 +0000 (0:00:00.758) 0:15:34.286 ********
2026-01-30 06:04:07.105744 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 06:04:07.105748 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 06:04:07.105752 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:04:07.105755 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105759 | orchestrator |
2026-01-30 06:04:07.105763 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-01-30 06:04:07.105767 | orchestrator | Friday 30 January 2026 06:03:41 +0000 (0:00:01.080) 0:15:35.367 ********
2026-01-30 06:04:07.105770 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-01-30 06:04:07.105774 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-01-30 06:04:07.105788 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-01-30 06:04:07.105792 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-01-30 06:04:07.105796 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-01-30 06:04:07.105799 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-01-30 06:04:07.105803 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105807 | orchestrator |
2026-01-30 06:04:07.105811 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-01-30 06:04:07.105815 | orchestrator | Friday 30 January 2026 06:03:43 +0000 (0:00:01.575) 0:15:36.942 ********
2026-01-30 06:04:07.105819 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:04:07.105822 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:04:07.105826 | orchestrator |
2026-01-30 06:04:07.105830 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-01-30 06:04:07.105834 | orchestrator | Friday 30 January 2026 06:03:46 +0000 (0:00:03.421) 0:15:40.363 ********
2026-01-30 06:04:07.105838 | orchestrator | changed: [testbed-node-2]
2026-01-30 06:04:07.105841 | orchestrator |
2026-01-30 06:04:07.105845 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 06:04:07.105849 | orchestrator | Friday 30 January 2026 06:03:48 +0000 (0:00:02.103) 0:15:42.467 ********
2026-01-30 06:04:07.105853 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-01-30 06:04:07.105858 | orchestrator |
2026-01-30 06:04:07.105861 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 06:04:07.105865 | orchestrator | Friday 30 January 2026 06:03:50 +0000 (0:00:01.237) 0:15:43.704 ********
2026-01-30 06:04:07.105869 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-01-30 06:04:07.105873 | orchestrator |
2026-01-30 06:04:07.105877 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 06:04:07.105880 | orchestrator | Friday 30 January 2026 06:03:51 +0000 (0:00:01.147) 0:15:44.852 ********
2026-01-30 06:04:07.105884 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:07.105888 | orchestrator |
2026-01-30 06:04:07.105892 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 06:04:07.105896 | orchestrator | Friday 30 January 2026 06:03:52 +0000 (0:00:01.557) 0:15:46.410 ********
2026-01-30 06:04:07.105899 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105903 | orchestrator |
2026-01-30 06:04:07.105910 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 06:04:07.105914 | orchestrator | Friday 30 January 2026 06:03:53 +0000 (0:00:01.129) 0:15:47.539 ********
2026-01-30 06:04:07.105918 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105922 | orchestrator |
2026-01-30 06:04:07.105926 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 06:04:07.105929 | orchestrator | Friday 30 January 2026 06:03:55 +0000 (0:00:01.116) 0:15:48.656 ********
2026-01-30 06:04:07.105933 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105937 | orchestrator |
2026-01-30 06:04:07.105941 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 06:04:07.105945 | orchestrator | Friday 30 January 2026 06:03:56 +0000 (0:00:01.177) 0:15:49.833 ********
2026-01-30 06:04:07.105948 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:07.105952 | orchestrator |
2026-01-30 06:04:07.105956 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 06:04:07.105960 | orchestrator | Friday 30 January 2026 06:03:57 +0000 (0:00:01.517) 0:15:51.350 ********
2026-01-30 06:04:07.105965 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105969 | orchestrator |
2026-01-30 06:04:07.105975 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 06:04:07.105981 | orchestrator | Friday 30 January 2026 06:03:58 +0000 (0:00:01.128) 0:15:52.479 ********
2026-01-30 06:04:07.105988 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.105994 | orchestrator |
2026-01-30 06:04:07.106000 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 06:04:07.106006 | orchestrator | Friday 30 January 2026 06:04:00 +0000 (0:00:01.174) 0:15:53.653 ********
2026-01-30 06:04:07.106012 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:07.106097 | orchestrator |
2026-01-30 06:04:07.106105 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 06:04:07.106112 | orchestrator | Friday 30 January 2026 06:04:01 +0000 (0:00:01.571) 0:15:55.225 ********
2026-01-30 06:04:07.106129 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:07.106135 | orchestrator |
2026-01-30 06:04:07.106148 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 06:04:07.106158 | orchestrator | Friday 30 January 2026 06:04:03 +0000 (0:00:01.485) 0:15:56.710 ********
2026-01-30 06:04:07.106163 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.106167 | orchestrator |
2026-01-30 06:04:07.106172 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 06:04:07.106176 | orchestrator | Friday 30 January 2026 06:04:03 +0000 (0:00:00.787) 0:15:57.498 ********
2026-01-30 06:04:07.106181 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:07.106185 | orchestrator |
2026-01-30 06:04:07.106189 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 06:04:07.106194 | orchestrator | Friday 30 January 2026 06:04:04 +0000 (0:00:00.782) 0:15:58.280 ********
2026-01-30 06:04:07.106198 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.106203 | orchestrator |
2026-01-30 06:04:07.106207 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 06:04:07.106212 | orchestrator | Friday 30 January 2026 06:04:05 +0000 (0:00:00.840) 0:15:59.121 ********
2026-01-30 06:04:07.106216 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:07.106220 | orchestrator |
2026-01-30 06:04:07.106225 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 06:04:07.106229 | orchestrator | Friday 30 January 2026 06:04:06 +0000 (0:00:00.774) 0:15:59.895 ********
2026-01-30 06:04:07.106239 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862384 | orchestrator |
2026-01-30 06:04:46.862465 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 06:04:46.862472 | orchestrator | Friday 30 January 2026 06:04:07 +0000 (0:00:00.808) 0:16:00.703 ********
2026-01-30 06:04:46.862476 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862481 | orchestrator |
2026-01-30 06:04:46.862499 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 06:04:46.862504 | orchestrator | Friday 30 January 2026 06:04:07 +0000 (0:00:00.768) 0:16:01.472 ********
2026-01-30 06:04:46.862508 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862511 | orchestrator |
2026-01-30 06:04:46.862515 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 06:04:46.862519 | orchestrator | Friday 30 January 2026 06:04:08 +0000 (0:00:00.802) 0:16:02.275 ********
2026-01-30 06:04:46.862523 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:46.862528 | orchestrator |
2026-01-30 06:04:46.862532 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 06:04:46.862536 | orchestrator | Friday 30 January 2026 06:04:09 +0000 (0:00:00.792) 0:16:03.067 ********
2026-01-30 06:04:46.862540 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:46.862544 | orchestrator |
2026-01-30 06:04:46.862547 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 06:04:46.862551 | orchestrator | Friday 30 January 2026 06:04:10 +0000 (0:00:00.790) 0:16:03.858 ********
2026-01-30 06:04:46.862555 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:04:46.862558 | orchestrator |
2026-01-30 06:04:46.862562 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 06:04:46.862566 | orchestrator | Friday 30 January 2026 06:04:11 +0000 (0:00:00.796) 0:16:04.655 ********
2026-01-30 06:04:46.862570 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862573 | orchestrator |
2026-01-30 06:04:46.862577 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 06:04:46.862581 | orchestrator | Friday 30 January 2026 06:04:11 +0000 (0:00:00.758) 0:16:05.414 ********
2026-01-30 06:04:46.862585 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862588 | orchestrator |
2026-01-30 06:04:46.862592 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 06:04:46.862596 | orchestrator | Friday 30 January 2026 06:04:12 +0000 (0:00:00.772) 0:16:06.186 ********
2026-01-30 06:04:46.862600 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862603 | orchestrator |
2026-01-30 06:04:46.862607 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 06:04:46.862611 | orchestrator | Friday 30 January 2026 06:04:13 +0000 (0:00:00.775) 0:16:06.962 ********
2026-01-30 06:04:46.862614 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862618 | orchestrator |
2026-01-30 06:04:46.862622 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 06:04:46.862626 | orchestrator | Friday 30 January 2026 06:04:14 +0000 (0:00:00.780) 0:16:07.742 ********
2026-01-30 06:04:46.862630 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862633 | orchestrator |
2026-01-30 06:04:46.862637 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 06:04:46.862641 | orchestrator | Friday 30 January 2026 06:04:14 +0000 (0:00:00.808) 0:16:08.551 ********
2026-01-30 06:04:46.862645 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862648 | orchestrator |
2026-01-30 06:04:46.862652 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 06:04:46.862656 | orchestrator | Friday 30 January 2026 06:04:15 +0000 (0:00:00.782) 0:16:09.333 ********
2026-01-30 06:04:46.862659 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862663 | orchestrator |
2026-01-30 06:04:46.862667 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-01-30 06:04:46.862671 | orchestrator | Friday 30 January 2026 06:04:16 +0000 (0:00:00.755) 0:16:10.088 ********
2026-01-30 06:04:46.862675 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862679 | orchestrator |
2026-01-30 06:04:46.862683 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-01-30 06:04:46.862686 | orchestrator | Friday 30 January 2026 06:04:17 +0000 (0:00:00.769) 0:16:10.858 ********
2026-01-30 06:04:46.862690 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862694 | orchestrator |
2026-01-30 06:04:46.862701 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-01-30 06:04:46.862705 | orchestrator | Friday 30 January 2026 06:04:17 +0000 (0:00:00.751) 0:16:11.610 ********
2026-01-30 06:04:46.862708 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862712 | orchestrator |
2026-01-30 06:04:46.862716 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-01-30 06:04:46.862720 | orchestrator | Friday 30 January 2026 06:04:18 +0000 (0:00:00.766) 0:16:12.377 ********
2026-01-30 06:04:46.862733 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:04:46.862737 | orchestrator |
2026-01-30 06:04:46.862740 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-01-30 06:04:46.862744 | orchestrator | Friday 30 January 2026 06:04:19 +0000 (0:00:00.780) 0:16:13.157 ********
2026-01-30 06:04:46.862748 | orchestrator | skipping: [testbed-node-2]
2026-01-30
06:04:46.862751 | orchestrator | 2026-01-30 06:04:46.862756 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:04:46.862760 | orchestrator | Friday 30 January 2026 06:04:20 +0000 (0:00:00.764) 0:16:13.922 ******** 2026-01-30 06:04:46.862763 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:04:46.862767 | orchestrator | 2026-01-30 06:04:46.862771 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:04:46.862774 | orchestrator | Friday 30 January 2026 06:04:21 +0000 (0:00:01.593) 0:16:15.515 ******** 2026-01-30 06:04:46.862778 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:04:46.862782 | orchestrator | 2026-01-30 06:04:46.862786 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:04:46.862789 | orchestrator | Friday 30 January 2026 06:04:24 +0000 (0:00:02.110) 0:16:17.626 ******** 2026-01-30 06:04:46.862793 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-01-30 06:04:46.862798 | orchestrator | 2026-01-30 06:04:46.862811 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:04:46.862815 | orchestrator | Friday 30 January 2026 06:04:25 +0000 (0:00:01.181) 0:16:18.808 ******** 2026-01-30 06:04:46.862819 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.862822 | orchestrator | 2026-01-30 06:04:46.862826 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:04:46.862830 | orchestrator | Friday 30 January 2026 06:04:26 +0000 (0:00:01.120) 0:16:19.928 ******** 2026-01-30 06:04:46.862834 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.862837 | orchestrator | 2026-01-30 06:04:46.862841 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 
2026-01-30 06:04:46.862845 | orchestrator | Friday 30 January 2026 06:04:27 +0000 (0:00:01.110) 0:16:21.039 ******** 2026-01-30 06:04:46.862848 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:04:46.862852 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:04:46.862856 | orchestrator | 2026-01-30 06:04:46.862860 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:04:46.862864 | orchestrator | Friday 30 January 2026 06:04:29 +0000 (0:00:01.847) 0:16:22.887 ******** 2026-01-30 06:04:46.862868 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:04:46.862872 | orchestrator | 2026-01-30 06:04:46.862875 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:04:46.862879 | orchestrator | Friday 30 January 2026 06:04:30 +0000 (0:00:01.451) 0:16:24.338 ******** 2026-01-30 06:04:46.862883 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.862887 | orchestrator | 2026-01-30 06:04:46.862890 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:04:46.862894 | orchestrator | Friday 30 January 2026 06:04:31 +0000 (0:00:01.140) 0:16:25.479 ******** 2026-01-30 06:04:46.862898 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.862901 | orchestrator | 2026-01-30 06:04:46.862905 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:04:46.862912 | orchestrator | Friday 30 January 2026 06:04:32 +0000 (0:00:00.752) 0:16:26.231 ******** 2026-01-30 06:04:46.862916 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.862920 | orchestrator | 2026-01-30 06:04:46.862924 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:04:46.862927 | orchestrator | 
Friday 30 January 2026 06:04:33 +0000 (0:00:00.762) 0:16:26.994 ******** 2026-01-30 06:04:46.862931 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-01-30 06:04:46.862935 | orchestrator | 2026-01-30 06:04:46.862939 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:04:46.862942 | orchestrator | Friday 30 January 2026 06:04:34 +0000 (0:00:01.096) 0:16:28.090 ******** 2026-01-30 06:04:46.862946 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:04:46.862950 | orchestrator | 2026-01-30 06:04:46.862955 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:04:46.862959 | orchestrator | Friday 30 January 2026 06:04:36 +0000 (0:00:01.758) 0:16:29.849 ******** 2026-01-30 06:04:46.862964 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:04:46.862969 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:04:46.862973 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:04:46.862978 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.862982 | orchestrator | 2026-01-30 06:04:46.862987 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:04:46.862991 | orchestrator | Friday 30 January 2026 06:04:37 +0000 (0:00:01.164) 0:16:31.014 ******** 2026-01-30 06:04:46.862995 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.863120 | orchestrator | 2026-01-30 06:04:46.863126 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:04:46.863130 | orchestrator | Friday 30 January 2026 06:04:38 +0000 (0:00:01.108) 0:16:32.123 ******** 2026-01-30 06:04:46.863135 | orchestrator | skipping: [testbed-node-2] 2026-01-30 
06:04:46.863139 | orchestrator | 2026-01-30 06:04:46.863144 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:04:46.863148 | orchestrator | Friday 30 January 2026 06:04:39 +0000 (0:00:01.158) 0:16:33.281 ******** 2026-01-30 06:04:46.863152 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.863157 | orchestrator | 2026-01-30 06:04:46.863161 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:04:46.863165 | orchestrator | Friday 30 January 2026 06:04:40 +0000 (0:00:01.151) 0:16:34.433 ******** 2026-01-30 06:04:46.863174 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.863178 | orchestrator | 2026-01-30 06:04:46.863183 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:04:46.863188 | orchestrator | Friday 30 January 2026 06:04:41 +0000 (0:00:01.132) 0:16:35.566 ******** 2026-01-30 06:04:46.863192 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:04:46.863196 | orchestrator | 2026-01-30 06:04:46.863201 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:04:46.863205 | orchestrator | Friday 30 January 2026 06:04:42 +0000 (0:00:00.786) 0:16:36.352 ******** 2026-01-30 06:04:46.863209 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:04:46.863213 | orchestrator | 2026-01-30 06:04:46.863217 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:04:46.863220 | orchestrator | Friday 30 January 2026 06:04:44 +0000 (0:00:02.164) 0:16:38.517 ******** 2026-01-30 06:04:46.863224 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:04:46.863228 | orchestrator | 2026-01-30 06:04:46.863231 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:04:46.863235 | orchestrator | Friday 30 January 2026 
06:04:45 +0000 (0:00:00.793) 0:16:39.310 ******** 2026-01-30 06:04:46.863239 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-01-30 06:04:46.863247 | orchestrator | 2026-01-30 06:04:46.863255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:05:22.987020 | orchestrator | Friday 30 January 2026 06:04:46 +0000 (0:00:01.150) 0:16:40.461 ******** 2026-01-30 06:05:22.987200 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987229 | orchestrator | 2026-01-30 06:05:22.987251 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:05:22.987272 | orchestrator | Friday 30 January 2026 06:04:47 +0000 (0:00:01.112) 0:16:41.573 ******** 2026-01-30 06:05:22.987293 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987314 | orchestrator | 2026-01-30 06:05:22.987327 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:05:22.987338 | orchestrator | Friday 30 January 2026 06:04:49 +0000 (0:00:01.165) 0:16:42.739 ******** 2026-01-30 06:05:22.987349 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987360 | orchestrator | 2026-01-30 06:05:22.987371 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:05:22.987382 | orchestrator | Friday 30 January 2026 06:04:50 +0000 (0:00:01.191) 0:16:43.930 ******** 2026-01-30 06:05:22.987394 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987404 | orchestrator | 2026-01-30 06:05:22.987415 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:05:22.987426 | orchestrator | Friday 30 January 2026 06:04:51 +0000 (0:00:01.151) 0:16:45.082 ******** 2026-01-30 06:05:22.987437 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987448 | 
orchestrator | 2026-01-30 06:05:22.987459 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:05:22.987470 | orchestrator | Friday 30 January 2026 06:04:52 +0000 (0:00:01.138) 0:16:46.220 ******** 2026-01-30 06:05:22.987480 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987491 | orchestrator | 2026-01-30 06:05:22.987502 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:05:22.987513 | orchestrator | Friday 30 January 2026 06:04:53 +0000 (0:00:01.145) 0:16:47.366 ******** 2026-01-30 06:05:22.987524 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987538 | orchestrator | 2026-01-30 06:05:22.987551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:05:22.987563 | orchestrator | Friday 30 January 2026 06:04:54 +0000 (0:00:01.139) 0:16:48.505 ******** 2026-01-30 06:05:22.987576 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.987588 | orchestrator | 2026-01-30 06:05:22.987601 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:05:22.987614 | orchestrator | Friday 30 January 2026 06:04:56 +0000 (0:00:01.152) 0:16:49.658 ******** 2026-01-30 06:05:22.987628 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:05:22.987642 | orchestrator | 2026-01-30 06:05:22.987654 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:05:22.987667 | orchestrator | Friday 30 January 2026 06:04:56 +0000 (0:00:00.775) 0:16:50.433 ******** 2026-01-30 06:05:22.987680 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-01-30 06:05:22.987708 | orchestrator | 2026-01-30 06:05:22.987721 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 
06:05:22.987734 | orchestrator | Friday 30 January 2026 06:04:57 +0000 (0:00:01.043) 0:16:51.477 ******** 2026-01-30 06:05:22.987747 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-01-30 06:05:22.987760 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-01-30 06:05:22.987773 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-01-30 06:05:22.987785 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-01-30 06:05:22.987797 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-01-30 06:05:22.987810 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-01-30 06:05:22.987823 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-01-30 06:05:22.987876 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:05:22.987890 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:05:22.987903 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:05:22.987916 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:05:22.987928 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:05:22.987939 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:05:22.987950 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:05:22.987961 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-01-30 06:05:22.987986 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-01-30 06:05:22.987997 | orchestrator | 2026-01-30 06:05:22.988008 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:05:22.988019 | orchestrator | Friday 30 January 2026 06:05:04 +0000 (0:00:06.556) 0:16:58.034 ******** 2026-01-30 06:05:22.988030 | orchestrator | skipping: 
[testbed-node-2] 2026-01-30 06:05:22.988041 | orchestrator | 2026-01-30 06:05:22.988116 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:05:22.988135 | orchestrator | Friday 30 January 2026 06:05:05 +0000 (0:00:00.735) 0:16:58.769 ******** 2026-01-30 06:05:22.988153 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988164 | orchestrator | 2026-01-30 06:05:22.988175 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:05:22.988186 | orchestrator | Friday 30 January 2026 06:05:05 +0000 (0:00:00.774) 0:16:59.544 ******** 2026-01-30 06:05:22.988196 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988207 | orchestrator | 2026-01-30 06:05:22.988218 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:05:22.988229 | orchestrator | Friday 30 January 2026 06:05:06 +0000 (0:00:00.766) 0:17:00.310 ******** 2026-01-30 06:05:22.988240 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988251 | orchestrator | 2026-01-30 06:05:22.988261 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-30 06:05:22.988291 | orchestrator | Friday 30 January 2026 06:05:07 +0000 (0:00:00.776) 0:17:01.087 ******** 2026-01-30 06:05:22.988303 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988314 | orchestrator | 2026-01-30 06:05:22.988324 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:05:22.988337 | orchestrator | Friday 30 January 2026 06:05:08 +0000 (0:00:00.766) 0:17:01.854 ******** 2026-01-30 06:05:22.988347 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988358 | orchestrator | 2026-01-30 06:05:22.988369 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 
2026-01-30 06:05:22.988379 | orchestrator | Friday 30 January 2026 06:05:09 +0000 (0:00:00.812) 0:17:02.666 ******** 2026-01-30 06:05:22.988390 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988401 | orchestrator | 2026-01-30 06:05:22.988412 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:05:22.988423 | orchestrator | Friday 30 January 2026 06:05:09 +0000 (0:00:00.789) 0:17:03.455 ******** 2026-01-30 06:05:22.988433 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988444 | orchestrator | 2026-01-30 06:05:22.988455 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:05:22.988466 | orchestrator | Friday 30 January 2026 06:05:10 +0000 (0:00:00.796) 0:17:04.252 ******** 2026-01-30 06:05:22.988476 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988487 | orchestrator | 2026-01-30 06:05:22.988498 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:05:22.988508 | orchestrator | Friday 30 January 2026 06:05:11 +0000 (0:00:00.737) 0:17:04.989 ******** 2026-01-30 06:05:22.988530 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988541 | orchestrator | 2026-01-30 06:05:22.988552 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:05:22.988563 | orchestrator | Friday 30 January 2026 06:05:12 +0000 (0:00:00.762) 0:17:05.752 ******** 2026-01-30 06:05:22.988574 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988584 | orchestrator | 2026-01-30 06:05:22.988595 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:05:22.988606 | orchestrator | Friday 30 January 2026 06:05:12 +0000 (0:00:00.751) 0:17:06.503 ******** 2026-01-30 
06:05:22.988617 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988628 | orchestrator | 2026-01-30 06:05:22.988638 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:05:22.988649 | orchestrator | Friday 30 January 2026 06:05:13 +0000 (0:00:00.758) 0:17:07.262 ******** 2026-01-30 06:05:22.988660 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988671 | orchestrator | 2026-01-30 06:05:22.988682 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:05:22.988693 | orchestrator | Friday 30 January 2026 06:05:14 +0000 (0:00:00.852) 0:17:08.115 ******** 2026-01-30 06:05:22.988703 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988714 | orchestrator | 2026-01-30 06:05:22.988725 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:05:22.988736 | orchestrator | Friday 30 January 2026 06:05:15 +0000 (0:00:00.774) 0:17:08.889 ******** 2026-01-30 06:05:22.988767 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988778 | orchestrator | 2026-01-30 06:05:22.988789 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:05:22.988800 | orchestrator | Friday 30 January 2026 06:05:16 +0000 (0:00:00.867) 0:17:09.757 ******** 2026-01-30 06:05:22.988811 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988822 | orchestrator | 2026-01-30 06:05:22.988833 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:05:22.988844 | orchestrator | Friday 30 January 2026 06:05:16 +0000 (0:00:00.765) 0:17:10.522 ******** 2026-01-30 06:05:22.988855 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988866 | orchestrator | 2026-01-30 06:05:22.988877 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, 
radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:05:22.988889 | orchestrator | Friday 30 January 2026 06:05:17 +0000 (0:00:00.777) 0:17:11.300 ******** 2026-01-30 06:05:22.988900 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988911 | orchestrator | 2026-01-30 06:05:22.988922 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:05:22.988933 | orchestrator | Friday 30 January 2026 06:05:18 +0000 (0:00:00.779) 0:17:12.079 ******** 2026-01-30 06:05:22.988944 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.988954 | orchestrator | 2026-01-30 06:05:22.988965 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:05:22.988982 | orchestrator | Friday 30 January 2026 06:05:19 +0000 (0:00:00.802) 0:17:12.881 ******** 2026-01-30 06:05:22.988993 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.989004 | orchestrator | 2026-01-30 06:05:22.989015 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:05:22.989026 | orchestrator | Friday 30 January 2026 06:05:20 +0000 (0:00:00.811) 0:17:13.692 ******** 2026-01-30 06:05:22.989037 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:05:22.989086 | orchestrator | 2026-01-30 06:05:22.989098 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:05:22.989109 | orchestrator | Friday 30 January 2026 06:05:20 +0000 (0:00:00.793) 0:17:14.486 ******** 2026-01-30 06:05:22.989120 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-30 06:05:22.989131 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-30 06:05:22.989141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-30 06:05:22.989175 | orchestrator | skipping: [testbed-node-2] 
2026-01-30 06:05:22.989186 | orchestrator | 2026-01-30 06:05:22.989197 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:05:22.989208 | orchestrator | Friday 30 January 2026 06:05:21 +0000 (0:00:01.043) 0:17:15.530 ******** 2026-01-30 06:05:22.989219 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-30 06:05:22.989237 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-30 06:06:49.590109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-30 06:06:49.590257 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.590286 | orchestrator | 2026-01-30 06:06:49.590308 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:06:49.590330 | orchestrator | Friday 30 January 2026 06:05:22 +0000 (0:00:01.056) 0:17:16.587 ******** 2026-01-30 06:06:49.590349 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-30 06:06:49.590367 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-30 06:06:49.590386 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-30 06:06:49.590405 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.590425 | orchestrator | 2026-01-30 06:06:49.590448 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:06:49.590466 | orchestrator | Friday 30 January 2026 06:05:24 +0000 (0:00:01.092) 0:17:17.679 ******** 2026-01-30 06:06:49.590484 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.590503 | orchestrator | 2026-01-30 06:06:49.590522 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:06:49.590543 | orchestrator | Friday 30 January 2026 06:05:24 +0000 (0:00:00.737) 0:17:18.417 ******** 2026-01-30 06:06:49.590565 | orchestrator | skipping: 
[testbed-node-2] => (item=0)  2026-01-30 06:06:49.590585 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.590606 | orchestrator | 2026-01-30 06:06:49.590624 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:06:49.590645 | orchestrator | Friday 30 January 2026 06:05:25 +0000 (0:00:00.908) 0:17:19.326 ******** 2026-01-30 06:06:49.590666 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.590687 | orchestrator | 2026-01-30 06:06:49.590709 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-01-30 06:06:49.590730 | orchestrator | Friday 30 January 2026 06:05:27 +0000 (0:00:01.422) 0:17:20.749 ******** 2026-01-30 06:06:49.590750 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.590770 | orchestrator | 2026-01-30 06:06:49.590792 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-01-30 06:06:49.590812 | orchestrator | Friday 30 January 2026 06:05:27 +0000 (0:00:00.786) 0:17:21.536 ******** 2026-01-30 06:06:49.590834 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-01-30 06:06:49.590855 | orchestrator | 2026-01-30 06:06:49.590873 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-01-30 06:06:49.590892 | orchestrator | Friday 30 January 2026 06:05:29 +0000 (0:00:01.275) 0:17:22.811 ******** 2026-01-30 06:06:49.590912 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.590931 | orchestrator | 2026-01-30 06:06:49.590951 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-01-30 06:06:49.590972 | orchestrator | Friday 30 January 2026 06:05:32 +0000 (0:00:03.288) 0:17:26.100 ******** 2026-01-30 06:06:49.591016 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.591038 | orchestrator | 2026-01-30 06:06:49.591055 | 
orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-01-30 06:06:49.591073 | orchestrator | Friday 30 January 2026 06:05:33 +0000 (0:00:01.149) 0:17:27.249 ******** 2026-01-30 06:06:49.591091 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.591110 | orchestrator | 2026-01-30 06:06:49.591129 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-01-30 06:06:49.591184 | orchestrator | Friday 30 January 2026 06:05:34 +0000 (0:00:01.127) 0:17:28.377 ******** 2026-01-30 06:06:49.591204 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.591223 | orchestrator | 2026-01-30 06:06:49.591241 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-01-30 06:06:49.591258 | orchestrator | Friday 30 January 2026 06:05:35 +0000 (0:00:01.080) 0:17:29.457 ******** 2026-01-30 06:06:49.591276 | orchestrator | changed: [testbed-node-2] 2026-01-30 06:06:49.591293 | orchestrator | 2026-01-30 06:06:49.591310 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-01-30 06:06:49.591329 | orchestrator | Friday 30 January 2026 06:05:37 +0000 (0:00:01.960) 0:17:31.417 ******** 2026-01-30 06:06:49.591347 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.591366 | orchestrator | 2026-01-30 06:06:49.591383 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-01-30 06:06:49.591401 | orchestrator | Friday 30 January 2026 06:05:39 +0000 (0:00:01.557) 0:17:32.974 ******** 2026-01-30 06:06:49.591418 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.591436 | orchestrator | 2026-01-30 06:06:49.591454 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-01-30 06:06:49.591491 | orchestrator | Friday 30 January 2026 06:05:40 +0000 (0:00:01.453) 0:17:34.428 ******** 2026-01-30 06:06:49.591510 
| orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.591527 | orchestrator | 2026-01-30 06:06:49.591544 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-01-30 06:06:49.591563 | orchestrator | Friday 30 January 2026 06:05:42 +0000 (0:00:01.443) 0:17:35.872 ******** 2026-01-30 06:06:49.591581 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:06:49.591599 | orchestrator | 2026-01-30 06:06:49.591615 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-01-30 06:06:49.591632 | orchestrator | Friday 30 January 2026 06:05:43 +0000 (0:00:01.573) 0:17:37.446 ******** 2026-01-30 06:06:49.591649 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:06:49.591667 | orchestrator | 2026-01-30 06:06:49.591685 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-01-30 06:06:49.591703 | orchestrator | Friday 30 January 2026 06:05:45 +0000 (0:00:01.587) 0:17:39.034 ******** 2026-01-30 06:06:49.591721 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 06:06:49.591740 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-30 06:06:49.591758 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-01-30 06:06:49.591776 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-01-30 06:06:49.591794 | orchestrator | 2026-01-30 06:06:49.591878 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-01-30 06:06:49.591900 | orchestrator | Friday 30 January 2026 06:05:49 +0000 (0:00:04.320) 0:17:43.354 ******** 2026-01-30 06:06:49.591918 | orchestrator | changed: [testbed-node-2] 2026-01-30 06:06:49.591935 | orchestrator | 2026-01-30 06:06:49.591953 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-01-30 06:06:49.591969 | orchestrator | Friday 30 January 2026 06:05:51 +0000 (0:00:01.923) 0:17:45.278 ******** 2026-01-30 06:06:49.591986 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.592063 | orchestrator | 2026-01-30 06:06:49.592081 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-01-30 06:06:49.592098 | orchestrator | Friday 30 January 2026 06:05:52 +0000 (0:00:01.089) 0:17:46.367 ******** 2026-01-30 06:06:49.592116 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.592134 | orchestrator | 2026-01-30 06:06:49.592151 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-01-30 06:06:49.592170 | orchestrator | Friday 30 January 2026 06:05:53 +0000 (0:00:01.092) 0:17:47.460 ******** 2026-01-30 06:06:49.592187 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.592205 | orchestrator | 2026-01-30 06:06:49.592223 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-01-30 06:06:49.592258 | orchestrator | Friday 30 January 2026 06:05:55 +0000 (0:00:01.695) 0:17:49.155 ******** 2026-01-30 06:06:49.592276 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.592292 | orchestrator | 2026-01-30 06:06:49.592310 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-01-30 06:06:49.592326 | orchestrator | Friday 30 January 2026 06:05:56 +0000 (0:00:01.426) 0:17:50.582 ******** 2026-01-30 06:06:49.592342 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.592359 | orchestrator | 2026-01-30 06:06:49.592376 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-01-30 06:06:49.592390 | orchestrator | Friday 30 January 2026 06:05:57 +0000 (0:00:00.758) 0:17:51.341 ******** 2026-01-30 06:06:49.592404 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-01-30 06:06:49.592421 | orchestrator | 2026-01-30 06:06:49.592437 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-01-30 06:06:49.592453 | orchestrator | Friday 30 January 2026 06:05:58 +0000 (0:00:01.086) 0:17:52.427 ******** 2026-01-30 06:06:49.592469 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.592486 | orchestrator | 2026-01-30 06:06:49.592502 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-01-30 06:06:49.592518 | orchestrator | Friday 30 January 2026 06:05:59 +0000 (0:00:01.118) 0:17:53.546 ******** 2026-01-30 06:06:49.592534 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.592550 | orchestrator | 2026-01-30 06:06:49.592566 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-01-30 06:06:49.592583 | orchestrator | Friday 30 January 2026 06:06:01 +0000 (0:00:01.087) 0:17:54.633 ******** 2026-01-30 06:06:49.592600 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-01-30 06:06:49.592615 | orchestrator | 2026-01-30 06:06:49.592631 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-01-30 06:06:49.592646 | orchestrator | Friday 30 January 2026 06:06:02 +0000 (0:00:01.122) 0:17:55.756 ******** 2026-01-30 06:06:49.592663 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.592679 | orchestrator | 2026-01-30 06:06:49.592696 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-01-30 06:06:49.592712 | orchestrator | Friday 30 January 2026 06:06:04 +0000 (0:00:02.604) 0:17:58.360 ******** 2026-01-30 06:06:49.592728 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.592743 | orchestrator | 2026-01-30 06:06:49.592759 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-01-30 06:06:49.592774 | orchestrator | Friday 30 January 2026 06:06:06 +0000 (0:00:01.985) 0:18:00.346 ******** 2026-01-30 06:06:49.592791 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.592807 | orchestrator | 2026-01-30 06:06:49.592823 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-01-30 06:06:49.592840 | orchestrator | Friday 30 January 2026 06:06:09 +0000 (0:00:02.401) 0:18:02.747 ******** 2026-01-30 06:06:49.592856 | orchestrator | changed: [testbed-node-2] 2026-01-30 06:06:49.592872 | orchestrator | 2026-01-30 06:06:49.592887 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-01-30 06:06:49.592902 | orchestrator | Friday 30 January 2026 06:06:12 +0000 (0:00:02.924) 0:18:05.672 ******** 2026-01-30 06:06:49.592917 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-01-30 06:06:49.592933 | orchestrator | 2026-01-30 06:06:49.592961 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-01-30 06:06:49.592978 | orchestrator | Friday 30 January 2026 06:06:13 +0000 (0:00:01.107) 0:18:06.779 ******** 2026-01-30 06:06:49.593051 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-01-30 06:06:49.593072 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.593089 | orchestrator | 2026-01-30 06:06:49.593104 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-01-30 06:06:49.593136 | orchestrator | Friday 30 January 2026 06:06:36 +0000 (0:00:22.964) 0:18:29.744 ******** 2026-01-30 06:06:49.593151 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:06:49.593166 | orchestrator | 2026-01-30 06:06:49.593182 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-01-30 06:06:49.593197 | orchestrator | Friday 30 January 2026 06:06:39 +0000 (0:00:02.883) 0:18:32.627 ******** 2026-01-30 06:06:49.593213 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:06:49.593230 | orchestrator | 2026-01-30 06:06:49.593246 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-01-30 06:06:49.593261 | orchestrator | Friday 30 January 2026 06:06:39 +0000 (0:00:00.783) 0:18:33.411 ******** 2026-01-30 06:06:49.593301 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-01-30 06:07:24.777776 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-01-30 06:07:24.777872 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-01-30 06:07:24.777882 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-01-30 06:07:24.777891 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-01-30 06:07:24.777899 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__4950f7c3aaab8f8776675897e5887a2ab4608774'}])  2026-01-30 06:07:24.777907 | orchestrator | 2026-01-30 06:07:24.777916 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-01-30 06:07:24.777923 | orchestrator | Friday 30 January 2026 06:06:49 +0000 (0:00:09.775) 0:18:43.186 ******** 2026-01-30 06:07:24.777929 | orchestrator | changed: [testbed-node-2] 2026-01-30 06:07:24.777936 | orchestrator | 
2026-01-30 06:07:24.777943 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:07:24.777949 | orchestrator | Friday 30 January 2026 06:06:51 +0000 (0:00:02.167) 0:18:45.353 ******** 2026-01-30 06:07:24.777955 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:07:24.777986 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-01-30 06:07:24.777994 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-01-30 06:07:24.778058 | orchestrator | 2026-01-30 06:07:24.778066 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:07:24.778073 | orchestrator | Friday 30 January 2026 06:06:53 +0000 (0:00:01.799) 0:18:47.153 ******** 2026-01-30 06:07:24.778079 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-30 06:07:24.778086 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-30 06:07:24.778107 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-30 06:07:24.778118 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:07:24.778127 | orchestrator | 2026-01-30 06:07:24.778138 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-01-30 06:07:24.778149 | orchestrator | Friday 30 January 2026 06:06:54 +0000 (0:00:01.033) 0:18:48.187 ******** 2026-01-30 06:07:24.778159 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:07:24.778169 | orchestrator | 2026-01-30 06:07:24.778180 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-01-30 06:07:24.778190 | orchestrator | Friday 30 January 2026 06:06:55 +0000 (0:00:00.783) 0:18:48.970 ******** 2026-01-30 06:07:24.778200 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:07:24.778211 | orchestrator | 2026-01-30 06:07:24.778221 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-01-30 06:07:24.778232 | orchestrator | 2026-01-30 06:07:24.778241 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-01-30 06:07:24.778251 | orchestrator | Friday 30 January 2026 06:06:58 +0000 (0:00:03.268) 0:18:52.239 ******** 2026-01-30 06:07:24.778262 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:07:24.778273 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:07:24.778285 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:07:24.778296 | orchestrator | 2026-01-30 06:07:24.778305 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-01-30 06:07:24.778312 | orchestrator | 2026-01-30 06:07:24.778318 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-01-30 06:07:24.778324 | orchestrator | Friday 30 January 2026 06:07:00 +0000 (0:00:01.726) 0:18:53.965 ******** 2026-01-30 06:07:24.778330 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778336 | orchestrator | 2026-01-30 06:07:24.778342 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:07:24.778363 | orchestrator | Friday 30 January 2026 06:07:01 +0000 (0:00:01.140) 0:18:55.106 ******** 2026-01-30 06:07:24.778370 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778376 | orchestrator | 2026-01-30 06:07:24.778382 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 06:07:24.778388 | orchestrator | Friday 30 January 2026 06:07:02 +0000 (0:00:01.125) 0:18:56.231 
******** 2026-01-30 06:07:24.778395 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778401 | orchestrator | 2026-01-30 06:07:24.778407 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:07:24.778413 | orchestrator | Friday 30 January 2026 06:07:03 +0000 (0:00:01.126) 0:18:57.358 ******** 2026-01-30 06:07:24.778419 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778425 | orchestrator | 2026-01-30 06:07:24.778431 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:07:24.778438 | orchestrator | Friday 30 January 2026 06:07:04 +0000 (0:00:01.160) 0:18:58.519 ******** 2026-01-30 06:07:24.778444 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778450 | orchestrator | 2026-01-30 06:07:24.778456 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:07:24.778462 | orchestrator | Friday 30 January 2026 06:07:06 +0000 (0:00:01.169) 0:18:59.689 ******** 2026-01-30 06:07:24.778468 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778474 | orchestrator | 2026-01-30 06:07:24.778480 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:07:24.778487 | orchestrator | Friday 30 January 2026 06:07:07 +0000 (0:00:01.146) 0:19:00.835 ******** 2026-01-30 06:07:24.778500 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778507 | orchestrator | 2026-01-30 06:07:24.778513 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:07:24.778519 | orchestrator | Friday 30 January 2026 06:07:08 +0000 (0:00:01.141) 0:19:01.976 ******** 2026-01-30 06:07:24.778525 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778531 | orchestrator | 2026-01-30 06:07:24.778537 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rbd_status] ****************************** 2026-01-30 06:07:24.778543 | orchestrator | Friday 30 January 2026 06:07:09 +0000 (0:00:01.145) 0:19:03.122 ******** 2026-01-30 06:07:24.778549 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778556 | orchestrator | 2026-01-30 06:07:24.778562 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:07:24.778568 | orchestrator | Friday 30 January 2026 06:07:10 +0000 (0:00:01.141) 0:19:04.263 ******** 2026-01-30 06:07:24.778574 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778580 | orchestrator | 2026-01-30 06:07:24.778586 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:07:24.778593 | orchestrator | Friday 30 January 2026 06:07:11 +0000 (0:00:01.113) 0:19:05.377 ******** 2026-01-30 06:07:24.778599 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778605 | orchestrator | 2026-01-30 06:07:24.778611 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:07:24.778617 | orchestrator | Friday 30 January 2026 06:07:12 +0000 (0:00:01.107) 0:19:06.485 ******** 2026-01-30 06:07:24.778623 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778629 | orchestrator | 2026-01-30 06:07:24.778635 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:07:24.778641 | orchestrator | Friday 30 January 2026 06:07:13 +0000 (0:00:01.101) 0:19:07.587 ******** 2026-01-30 06:07:24.778647 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778653 | orchestrator | 2026-01-30 06:07:24.778660 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 06:07:24.778666 | orchestrator | Friday 30 January 2026 06:07:14 +0000 (0:00:00.936) 0:19:08.524 ******** 2026-01-30 06:07:24.778672 | orchestrator | 
skipping: [testbed-node-0] 2026-01-30 06:07:24.778678 | orchestrator | 2026-01-30 06:07:24.778684 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:07:24.778691 | orchestrator | Friday 30 January 2026 06:07:15 +0000 (0:00:00.951) 0:19:09.476 ******** 2026-01-30 06:07:24.778701 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778712 | orchestrator | 2026-01-30 06:07:24.778723 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:07:24.778733 | orchestrator | Friday 30 January 2026 06:07:16 +0000 (0:00:00.999) 0:19:10.476 ******** 2026-01-30 06:07:24.778743 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778754 | orchestrator | 2026-01-30 06:07:24.778767 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:07:24.778775 | orchestrator | Friday 30 January 2026 06:07:17 +0000 (0:00:01.122) 0:19:11.598 ******** 2026-01-30 06:07:24.778785 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778794 | orchestrator | 2026-01-30 06:07:24.778803 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:07:24.778813 | orchestrator | Friday 30 January 2026 06:07:19 +0000 (0:00:01.128) 0:19:12.727 ******** 2026-01-30 06:07:24.778823 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778834 | orchestrator | 2026-01-30 06:07:24.778845 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:07:24.778855 | orchestrator | Friday 30 January 2026 06:07:20 +0000 (0:00:01.161) 0:19:13.888 ******** 2026-01-30 06:07:24.778866 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778876 | orchestrator | 2026-01-30 06:07:24.778887 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 
2026-01-30 06:07:24.778897 | orchestrator | Friday 30 January 2026 06:07:21 +0000 (0:00:01.119) 0:19:15.007 ******** 2026-01-30 06:07:24.778912 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.778921 | orchestrator | 2026-01-30 06:07:24.778930 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:07:24.778941 | orchestrator | Friday 30 January 2026 06:07:22 +0000 (0:00:01.101) 0:19:16.108 ******** 2026-01-30 06:07:24.778952 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:07:24.779044 | orchestrator | 2026-01-30 06:07:24.779058 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:07:24.779069 | orchestrator | Friday 30 January 2026 06:07:23 +0000 (0:00:01.110) 0:19:17.219 ******** 2026-01-30 06:07:24.779089 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651225 | orchestrator | 2026-01-30 06:08:05.651366 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:08:05.651386 | orchestrator | Friday 30 January 2026 06:07:24 +0000 (0:00:01.157) 0:19:18.376 ******** 2026-01-30 06:08:05.651398 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651411 | orchestrator | 2026-01-30 06:08:05.651422 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-01-30 06:08:05.651434 | orchestrator | Friday 30 January 2026 06:07:25 +0000 (0:00:01.126) 0:19:19.503 ******** 2026-01-30 06:08:05.651445 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651455 | orchestrator | 2026-01-30 06:08:05.651466 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:08:05.651477 | orchestrator | Friday 30 January 2026 06:07:27 +0000 (0:00:01.116) 0:19:20.620 ******** 2026-01-30 06:08:05.651488 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651499 
| orchestrator | 2026-01-30 06:08:05.651510 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:08:05.651521 | orchestrator | Friday 30 January 2026 06:07:28 +0000 (0:00:01.115) 0:19:21.735 ******** 2026-01-30 06:08:05.651532 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651542 | orchestrator | 2026-01-30 06:08:05.651553 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:08:05.651564 | orchestrator | Friday 30 January 2026 06:07:29 +0000 (0:00:01.103) 0:19:22.838 ******** 2026-01-30 06:08:05.651575 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651586 | orchestrator | 2026-01-30 06:08:05.651596 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:08:05.651607 | orchestrator | Friday 30 January 2026 06:07:30 +0000 (0:00:01.116) 0:19:23.955 ******** 2026-01-30 06:08:05.651618 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651629 | orchestrator | 2026-01-30 06:08:05.651640 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:08:05.651651 | orchestrator | Friday 30 January 2026 06:07:31 +0000 (0:00:01.104) 0:19:25.059 ******** 2026-01-30 06:08:05.651661 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651672 | orchestrator | 2026-01-30 06:08:05.651683 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:08:05.651694 | orchestrator | Friday 30 January 2026 06:07:32 +0000 (0:00:01.123) 0:19:26.183 ******** 2026-01-30 06:08:05.651705 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651718 | orchestrator | 2026-01-30 06:08:05.651737 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:08:05.651757 | orchestrator | Friday 30 January 2026 
06:07:33 +0000 (0:00:01.117) 0:19:27.300 ******** 2026-01-30 06:08:05.651776 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651793 | orchestrator | 2026-01-30 06:08:05.651813 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:08:05.651832 | orchestrator | Friday 30 January 2026 06:07:34 +0000 (0:00:01.111) 0:19:28.412 ******** 2026-01-30 06:08:05.651846 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651857 | orchestrator | 2026-01-30 06:08:05.651868 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:08:05.651907 | orchestrator | Friday 30 January 2026 06:07:35 +0000 (0:00:01.127) 0:19:29.539 ******** 2026-01-30 06:08:05.651919 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.651929 | orchestrator | 2026-01-30 06:08:05.651976 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:08:05.651987 | orchestrator | Friday 30 January 2026 06:07:37 +0000 (0:00:01.112) 0:19:30.652 ******** 2026-01-30 06:08:05.651998 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652008 | orchestrator | 2026-01-30 06:08:05.652019 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:08:05.652030 | orchestrator | Friday 30 January 2026 06:07:38 +0000 (0:00:01.098) 0:19:31.750 ******** 2026-01-30 06:08:05.652040 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652051 | orchestrator | 2026-01-30 06:08:05.652062 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:08:05.652072 | orchestrator | Friday 30 January 2026 06:07:39 +0000 (0:00:00.931) 0:19:32.682 ******** 2026-01-30 06:08:05.652083 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652094 | orchestrator | 2026-01-30 06:08:05.652119 | orchestrator | TASK 
[ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:08:05.652130 | orchestrator | Friday 30 January 2026 06:07:39 +0000 (0:00:00.902) 0:19:33.585 ******** 2026-01-30 06:08:05.652141 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652152 | orchestrator | 2026-01-30 06:08:05.652163 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-30 06:08:05.652173 | orchestrator | Friday 30 January 2026 06:07:41 +0000 (0:00:01.084) 0:19:34.669 ******** 2026-01-30 06:08:05.652184 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652195 | orchestrator | 2026-01-30 06:08:05.652206 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:08:05.652217 | orchestrator | Friday 30 January 2026 06:07:42 +0000 (0:00:01.067) 0:19:35.736 ******** 2026-01-30 06:08:05.652233 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652252 | orchestrator | 2026-01-30 06:08:05.652273 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:08:05.652295 | orchestrator | Friday 30 January 2026 06:07:43 +0000 (0:00:01.082) 0:19:36.819 ******** 2026-01-30 06:08:05.652315 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652329 | orchestrator | 2026-01-30 06:08:05.652340 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:08:05.652350 | orchestrator | Friday 30 January 2026 06:07:44 +0000 (0:00:01.099) 0:19:37.919 ******** 2026-01-30 06:08:05.652361 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652372 | orchestrator | 2026-01-30 06:08:05.652382 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:08:05.652393 | orchestrator | Friday 30 January 
2026 06:07:45 +0000 (0:00:01.163) 0:19:39.082 ******** 2026-01-30 06:08:05.652498 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652513 | orchestrator | 2026-01-30 06:08:05.652525 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:08:05.652536 | orchestrator | Friday 30 January 2026 06:07:46 +0000 (0:00:01.123) 0:19:40.206 ******** 2026-01-30 06:08:05.652547 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652566 | orchestrator | 2026-01-30 06:08:05.652583 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:08:05.652594 | orchestrator | Friday 30 January 2026 06:07:47 +0000 (0:00:01.105) 0:19:41.312 ******** 2026-01-30 06:08:05.652605 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652615 | orchestrator | 2026-01-30 06:08:05.652626 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:08:05.652637 | orchestrator | Friday 30 January 2026 06:07:48 +0000 (0:00:01.120) 0:19:42.432 ******** 2026-01-30 06:08:05.652648 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652671 | orchestrator | 2026-01-30 06:08:05.652682 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:08:05.652693 | orchestrator | Friday 30 January 2026 06:07:49 +0000 (0:00:01.152) 0:19:43.585 ******** 2026-01-30 06:08:05.652704 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652715 | orchestrator | 2026-01-30 06:08:05.652730 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:08:05.652748 | orchestrator | Friday 30 January 2026 06:07:51 +0000 (0:00:01.205) 0:19:44.791 ******** 2026-01-30 06:08:05.652767 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652786 | orchestrator | 2026-01-30 06:08:05.652805 
| orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:08:05.652822 | orchestrator | Friday 30 January 2026 06:07:52 +0000 (0:00:01.098) 0:19:45.889 ******** 2026-01-30 06:08:05.652841 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652860 | orchestrator | 2026-01-30 06:08:05.652880 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:08:05.652899 | orchestrator | Friday 30 January 2026 06:07:53 +0000 (0:00:01.205) 0:19:47.095 ******** 2026-01-30 06:08:05.652918 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.652929 | orchestrator | 2026-01-30 06:08:05.653009 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:08:05.653020 | orchestrator | Friday 30 January 2026 06:07:54 +0000 (0:00:01.085) 0:19:48.181 ******** 2026-01-30 06:08:05.653031 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653042 | orchestrator | 2026-01-30 06:08:05.653053 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:08:05.653066 | orchestrator | Friday 30 January 2026 06:07:55 +0000 (0:00:00.924) 0:19:49.105 ******** 2026-01-30 06:08:05.653076 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653087 | orchestrator | 2026-01-30 06:08:05.653098 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:08:05.653109 | orchestrator | Friday 30 January 2026 06:07:56 +0000 (0:00:00.898) 0:19:50.004 ******** 2026-01-30 06:08:05.653120 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653131 | orchestrator | 2026-01-30 06:08:05.653142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:08:05.653152 | orchestrator | Friday 30 
January 2026 06:07:57 +0000 (0:00:00.887) 0:19:50.891 ******** 2026-01-30 06:08:05.653163 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653174 | orchestrator | 2026-01-30 06:08:05.653185 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:08:05.653195 | orchestrator | Friday 30 January 2026 06:07:58 +0000 (0:00:00.939) 0:19:51.830 ******** 2026-01-30 06:08:05.653206 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653218 | orchestrator | 2026-01-30 06:08:05.653237 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:08:05.653262 | orchestrator | Friday 30 January 2026 06:07:59 +0000 (0:00:00.898) 0:19:52.729 ******** 2026-01-30 06:08:05.653288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-30 06:08:05.653308 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-30 06:08:05.653326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-30 06:08:05.653357 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653376 | orchestrator | 2026-01-30 06:08:05.653393 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:08:05.653413 | orchestrator | Friday 30 January 2026 06:08:00 +0000 (0:00:01.561) 0:19:54.291 ******** 2026-01-30 06:08:05.653431 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-30 06:08:05.653452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-30 06:08:05.653468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-30 06:08:05.653486 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653555 | orchestrator | 2026-01-30 06:08:05.653574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:08:05.653593 | orchestrator | 
Friday 30 January 2026 06:08:02 +0000 (0:00:01.339) 0:19:55.631 ******** 2026-01-30 06:08:05.653613 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-30 06:08:05.653630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-30 06:08:05.653649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-30 06:08:05.653663 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653674 | orchestrator | 2026-01-30 06:08:05.653685 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:08:05.653696 | orchestrator | Friday 30 January 2026 06:08:03 +0000 (0:00:01.310) 0:19:56.941 ******** 2026-01-30 06:08:05.653707 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:05.653717 | orchestrator | 2026-01-30 06:08:05.653728 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:08:05.653739 | orchestrator | Friday 30 January 2026 06:08:04 +0000 (0:00:01.098) 0:19:58.039 ******** 2026-01-30 06:08:05.653751 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-30 06:08:05.653775 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:38.329867 | orchestrator | 2026-01-30 06:08:38.330011 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:08:38.330070 | orchestrator | Friday 30 January 2026 06:08:05 +0000 (0:00:01.209) 0:19:59.249 ******** 2026-01-30 06:08:38.330078 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:08:38.330086 | orchestrator | 2026-01-30 06:08:38.330093 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-30 06:08:38.330100 | orchestrator | Friday 30 January 2026 06:08:06 +0000 (0:00:01.137) 0:20:00.387 ******** 2026-01-30 06:08:38.330107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 06:08:38.330115 
| orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 06:08:38.330121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 06:08:38.330128 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:08:38.330135 | orchestrator |
2026-01-30 06:08:38.330141 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-30 06:08:38.330148 | orchestrator | Friday 30 January 2026 06:08:08 +0000 (0:00:01.353) 0:20:01.740 ********
2026-01-30 06:08:38.330155 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:08:38.330161 | orchestrator |
2026-01-30 06:08:38.330168 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-30 06:08:38.330174 | orchestrator | Friday 30 January 2026 06:08:09 +0000 (0:00:01.121) 0:20:02.862 ********
2026-01-30 06:08:38.330181 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:08:38.330188 | orchestrator |
2026-01-30 06:08:38.330194 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-30 06:08:38.330201 | orchestrator | Friday 30 January 2026 06:08:10 +0000 (0:00:01.119) 0:20:03.981 ********
2026-01-30 06:08:38.330207 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:08:38.330214 | orchestrator |
2026-01-30 06:08:38.330221 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-30 06:08:38.330227 | orchestrator | Friday 30 January 2026 06:08:11 +0000 (0:00:01.129) 0:20:05.110 ********
2026-01-30 06:08:38.330234 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:08:38.330240 | orchestrator |
2026-01-30 06:08:38.330247 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-01-30 06:08:38.330254 | orchestrator |
2026-01-30 06:08:38.330260 | orchestrator | TASK [Stop ceph mgr]
***********************************************************
2026-01-30 06:08:38.330267 | orchestrator | Friday 30 January 2026 06:08:12 +0000 (0:00:01.241) 0:20:06.352 ********
2026-01-30 06:08:38.330274 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330280 | orchestrator |
2026-01-30 06:08:38.330287 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:08:38.330313 | orchestrator | Friday 30 January 2026 06:08:13 +0000 (0:00:00.773) 0:20:07.125 ********
2026-01-30 06:08:38.330320 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330327 | orchestrator |
2026-01-30 06:08:38.330333 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 06:08:38.330340 | orchestrator | Friday 30 January 2026 06:08:14 +0000 (0:00:00.777) 0:20:07.902 ********
2026-01-30 06:08:38.330346 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330353 | orchestrator |
2026-01-30 06:08:38.330360 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 06:08:38.330366 | orchestrator | Friday 30 January 2026 06:08:15 +0000 (0:00:00.754) 0:20:08.656 ********
2026-01-30 06:08:38.330373 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330379 | orchestrator |
2026-01-30 06:08:38.330386 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 06:08:38.330393 | orchestrator | Friday 30 January 2026 06:08:15 +0000 (0:00:00.775) 0:20:09.432 ********
2026-01-30 06:08:38.330399 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330406 | orchestrator |
2026-01-30 06:08:38.330412 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 06:08:38.330419 | orchestrator | Friday 30 January 2026 06:08:16 +0000 (0:00:00.764) 0:20:10.197 ********
2026-01-30 06:08:38.330427 |
orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330438 | orchestrator |
2026-01-30 06:08:38.330448 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 06:08:38.330460 | orchestrator | Friday 30 January 2026 06:08:17 +0000 (0:00:00.766) 0:20:10.964 ********
2026-01-30 06:08:38.330485 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330498 | orchestrator |
2026-01-30 06:08:38.330509 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 06:08:38.330521 | orchestrator | Friday 30 January 2026 06:08:18 +0000 (0:00:00.754) 0:20:11.718 ********
2026-01-30 06:08:38.330530 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330538 | orchestrator |
2026-01-30 06:08:38.330546 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 06:08:38.330554 | orchestrator | Friday 30 January 2026 06:08:18 +0000 (0:00:00.755) 0:20:12.474 ********
2026-01-30 06:08:38.330561 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330569 | orchestrator |
2026-01-30 06:08:38.330576 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 06:08:38.330584 | orchestrator | Friday 30 January 2026 06:08:19 +0000 (0:00:00.754) 0:20:13.228 ********
2026-01-30 06:08:38.330591 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330600 | orchestrator |
2026-01-30 06:08:38.330607 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 06:08:38.330614 | orchestrator | Friday 30 January 2026 06:08:20 +0000 (0:00:00.803) 0:20:14.032 ********
2026-01-30 06:08:38.330622 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330629 | orchestrator |
2026-01-30 06:08:38.330638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 06:08:38.330645 | orchestrator | Friday 30 January 2026 06:08:21 +0000 (0:00:00.788) 0:20:14.820 ********
2026-01-30 06:08:38.330653 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330661 | orchestrator |
2026-01-30 06:08:38.330668 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 06:08:38.330676 | orchestrator | Friday 30 January 2026 06:08:22 +0000 (0:00:00.853) 0:20:15.674 ********
2026-01-30 06:08:38.330684 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330691 | orchestrator |
2026-01-30 06:08:38.330713 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 06:08:38.330721 | orchestrator | Friday 30 January 2026 06:08:22 +0000 (0:00:00.758) 0:20:16.433 ********
2026-01-30 06:08:38.330729 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330736 | orchestrator |
2026-01-30 06:08:38.330744 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 06:08:38.330758 | orchestrator | Friday 30 January 2026 06:08:23 +0000 (0:00:00.775) 0:20:17.208 ********
2026-01-30 06:08:38.330766 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330774 | orchestrator |
2026-01-30 06:08:38.330782 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 06:08:38.330789 | orchestrator | Friday 30 January 2026 06:08:24 +0000 (0:00:00.778) 0:20:17.987 ********
2026-01-30 06:08:38.330797 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330804 | orchestrator |
2026-01-30 06:08:38.330810 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 06:08:38.330817 | orchestrator | Friday 30 January 2026 06:08:25 +0000 (0:00:00.766) 0:20:18.754 ********
2026-01-30 06:08:38.330823 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330830
| orchestrator |
2026-01-30 06:08:38.330836 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 06:08:38.330843 | orchestrator | Friday 30 January 2026 06:08:25 +0000 (0:00:00.782) 0:20:19.536 ********
2026-01-30 06:08:38.330849 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330856 | orchestrator |
2026-01-30 06:08:38.330862 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 06:08:38.330869 | orchestrator | Friday 30 January 2026 06:08:26 +0000 (0:00:00.767) 0:20:20.304 ********
2026-01-30 06:08:38.330875 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330882 | orchestrator |
2026-01-30 06:08:38.330889 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-01-30 06:08:38.330895 | orchestrator | Friday 30 January 2026 06:08:27 +0000 (0:00:00.754) 0:20:21.059 ********
2026-01-30 06:08:38.330902 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330928 | orchestrator |
2026-01-30 06:08:38.330936 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-01-30 06:08:38.330942 | orchestrator | Friday 30 January 2026 06:08:28 +0000 (0:00:00.772) 0:20:21.832 ********
2026-01-30 06:08:38.330950 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.330964 | orchestrator |
2026-01-30 06:08:38.330974 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-01-30 06:08:38.330984 | orchestrator | Friday 30 January 2026 06:08:28 +0000 (0:00:00.767) 0:20:22.600 ********
2026-01-30 06:08:38.330995 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331005 | orchestrator |
2026-01-30 06:08:38.331016 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-01-30 06:08:38.331027 | orchestrator | Friday 30
January 2026 06:08:29 +0000 (0:00:00.784) 0:20:23.385 ********
2026-01-30 06:08:38.331037 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331048 | orchestrator |
2026-01-30 06:08:38.331059 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-01-30 06:08:38.331071 | orchestrator | Friday 30 January 2026 06:08:30 +0000 (0:00:00.771) 0:20:24.156 ********
2026-01-30 06:08:38.331082 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331094 | orchestrator |
2026-01-30 06:08:38.331105 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-30 06:08:38.331117 | orchestrator | Friday 30 January 2026 06:08:31 +0000 (0:00:00.864) 0:20:25.021 ********
2026-01-30 06:08:38.331129 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331137 | orchestrator |
2026-01-30 06:08:38.331143 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-30 06:08:38.331150 | orchestrator | Friday 30 January 2026 06:08:32 +0000 (0:00:00.765) 0:20:25.786 ********
2026-01-30 06:08:38.331156 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331162 | orchestrator |
2026-01-30 06:08:38.331169 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 06:08:38.331175 | orchestrator | Friday 30 January 2026 06:08:32 +0000 (0:00:00.779) 0:20:26.565 ********
2026-01-30 06:08:38.331182 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331188 | orchestrator |
2026-01-30 06:08:38.331200 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 06:08:38.331213 | orchestrator | Friday 30 January 2026 06:08:33 +0000 (0:00:00.758) 0:20:27.324 ********
2026-01-30 06:08:38.331220 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331226 | orchestrator |
2026-01-30 06:08:38.331233 |
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 06:08:38.331239 | orchestrator | Friday 30 January 2026 06:08:34 +0000 (0:00:00.771) 0:20:28.096 ********
2026-01-30 06:08:38.331246 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331252 | orchestrator |
2026-01-30 06:08:38.331259 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-30 06:08:38.331265 | orchestrator | Friday 30 January 2026 06:08:35 +0000 (0:00:00.768) 0:20:28.864 ********
2026-01-30 06:08:38.331272 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331278 | orchestrator |
2026-01-30 06:08:38.331285 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-30 06:08:38.331291 | orchestrator | Friday 30 January 2026 06:08:36 +0000 (0:00:00.770) 0:20:29.634 ********
2026-01-30 06:08:38.331298 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331304 | orchestrator |
2026-01-30 06:08:38.331311 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-30 06:08:38.331317 | orchestrator | Friday 30 January 2026 06:08:36 +0000 (0:00:00.757) 0:20:30.392 ********
2026-01-30 06:08:38.331324 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331330 | orchestrator |
2026-01-30 06:08:38.331337 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-30 06:08:38.331343 | orchestrator | Friday 30 January 2026 06:08:37 +0000 (0:00:00.763) 0:20:31.155 ********
2026-01-30 06:08:38.331350 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:08:38.331357 | orchestrator |
2026-01-30 06:08:38.331429 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-30 06:09:06.951106 | orchestrator | Friday 30 January 2026 06:08:38 +0000 (0:00:00.773) 0:20:31.928 ********
2026-01-30 06:09:06.951235 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951254 | orchestrator |
2026-01-30 06:09:06.951271 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-30 06:09:06.951286 | orchestrator | Friday 30 January 2026 06:08:39 +0000 (0:00:00.759) 0:20:32.688 ********
2026-01-30 06:09:06.951302 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951317 | orchestrator |
2026-01-30 06:09:06.951333 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 06:09:06.951348 | orchestrator | Friday 30 January 2026 06:08:39 +0000 (0:00:00.805) 0:20:33.494 ********
2026-01-30 06:09:06.951362 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951377 | orchestrator |
2026-01-30 06:09:06.951391 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 06:09:06.951406 | orchestrator | Friday 30 January 2026 06:08:40 +0000 (0:00:00.761) 0:20:34.255 ********
2026-01-30 06:09:06.951422 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951438 | orchestrator |
2026-01-30 06:09:06.951454 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 06:09:06.951470 | orchestrator | Friday 30 January 2026 06:08:41 +0000 (0:00:00.781) 0:20:35.037 ********
2026-01-30 06:09:06.951487 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951502 | orchestrator |
2026-01-30 06:09:06.951518 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 06:09:06.951534 | orchestrator | Friday 30 January 2026 06:08:42 +0000 (0:00:00.753) 0:20:35.790 ********
2026-01-30 06:09:06.951551 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951567 | orchestrator |
2026-01-30 06:09:06.951583 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch
--report' to see how many osds are to be created] ***
2026-01-30 06:09:06.951601 | orchestrator | Friday 30 January 2026 06:08:42 +0000 (0:00:00.768) 0:20:36.558 ********
2026-01-30 06:09:06.951618 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951637 | orchestrator |
2026-01-30 06:09:06.951688 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 06:09:06.951706 | orchestrator | Friday 30 January 2026 06:08:43 +0000 (0:00:00.766) 0:20:37.325 ********
2026-01-30 06:09:06.951723 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951741 | orchestrator |
2026-01-30 06:09:06.951759 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-30 06:09:06.951777 | orchestrator | Friday 30 January 2026 06:08:44 +0000 (0:00:00.759) 0:20:38.084 ********
2026-01-30 06:09:06.951795 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951811 | orchestrator |
2026-01-30 06:09:06.951827 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-30 06:09:06.951842 | orchestrator | Friday 30 January 2026 06:08:45 +0000 (0:00:00.763) 0:20:38.848 ********
2026-01-30 06:09:06.951857 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951871 | orchestrator |
2026-01-30 06:09:06.951887 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-30 06:09:06.951930 | orchestrator | Friday 30 January 2026 06:08:45 +0000 (0:00:00.754) 0:20:39.602 ********
2026-01-30 06:09:06.951945 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.951960 | orchestrator |
2026-01-30 06:09:06.951974 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-30 06:09:06.951989 | orchestrator | Friday 30 January 2026 06:08:46 +0000
(0:00:00.622) 0:20:40.225 ********
2026-01-30 06:09:06.952004 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952019 | orchestrator |
2026-01-30 06:09:06.952033 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-30 06:09:06.952048 | orchestrator | Friday 30 January 2026 06:08:47 +0000 (0:00:00.630) 0:20:40.855 ********
2026-01-30 06:09:06.952064 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952079 | orchestrator |
2026-01-30 06:09:06.952095 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-30 06:09:06.952111 | orchestrator | Friday 30 January 2026 06:08:47 +0000 (0:00:00.720) 0:20:41.576 ********
2026-01-30 06:09:06.952125 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952139 | orchestrator |
2026-01-30 06:09:06.952170 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-30 06:09:06.952186 | orchestrator | Friday 30 January 2026 06:08:48 +0000 (0:00:00.624) 0:20:42.200 ********
2026-01-30 06:09:06.952201 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952215 | orchestrator |
2026-01-30 06:09:06.952229 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-30 06:09:06.952244 | orchestrator | Friday 30 January 2026 06:08:49 +0000 (0:00:01.044) 0:20:43.245 ********
2026-01-30 06:09:06.952259 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952275 | orchestrator |
2026-01-30 06:09:06.952286 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-30 06:09:06.952295 | orchestrator | Friday 30 January 2026 06:08:50 +0000 (0:00:00.614) 0:20:43.860 ********
2026-01-30 06:09:06.952304 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952312 | orchestrator |
2026-01-30 06:09:06.952321 | orchestrator | TASK [ceph-facts :
Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:09:06.952331 | orchestrator | Friday 30 January 2026 06:08:50 +0000 (0:00:00.687) 0:20:44.548 ********
2026-01-30 06:09:06.952340 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952349 | orchestrator |
2026-01-30 06:09:06.952357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:09:06.952366 | orchestrator | Friday 30 January 2026 06:08:51 +0000 (0:00:00.735) 0:20:45.283 ********
2026-01-30 06:09:06.952375 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952384 | orchestrator |
2026-01-30 06:09:06.952392 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:09:06.952401 | orchestrator | Friday 30 January 2026 06:08:52 +0000 (0:00:00.762) 0:20:46.046 ********
2026-01-30 06:09:06.952423 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952432 | orchestrator |
2026-01-30 06:09:06.952462 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:09:06.952472 | orchestrator | Friday 30 January 2026 06:08:53 +0000 (0:00:00.739) 0:20:46.785 ********
2026-01-30 06:09:06.952481 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952490 | orchestrator |
2026-01-30 06:09:06.952499 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:09:06.952507 | orchestrator | Friday 30 January 2026 06:08:53 +0000 (0:00:00.740) 0:20:47.526 ********
2026-01-30 06:09:06.952516 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-30 06:09:06.952525 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-30 06:09:06.952534 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-30 06:09:06.952542 | orchestrator |
skipping: [testbed-node-1]
2026-01-30 06:09:06.952551 | orchestrator |
2026-01-30 06:09:06.952560 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:09:06.952568 | orchestrator | Friday 30 January 2026 06:08:54 +0000 (0:00:01.017) 0:20:48.543 ********
2026-01-30 06:09:06.952577 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-30 06:09:06.952586 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-30 06:09:06.952594 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-30 06:09:06.952603 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952611 | orchestrator |
2026-01-30 06:09:06.952620 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:09:06.952629 | orchestrator | Friday 30 January 2026 06:08:55 +0000 (0:00:01.049) 0:20:49.592 ********
2026-01-30 06:09:06.952637 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-30 06:09:06.952646 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-30 06:09:06.952654 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-30 06:09:06.952663 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952672 | orchestrator |
2026-01-30 06:09:06.952680 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:09:06.952689 | orchestrator | Friday 30 January 2026 06:08:56 +0000 (0:00:01.017) 0:20:50.610 ********
2026-01-30 06:09:06.952703 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952717 | orchestrator |
2026-01-30 06:09:06.952730 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:09:06.952743 | orchestrator | Friday 30 January 2026 06:08:57 +0000 (0:00:00.776) 0:20:51.387 ********
2026-01-30 06:09:06.952757 |
orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-30 06:09:06.952769 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952782 | orchestrator |
2026-01-30 06:09:06.952796 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 06:09:06.952809 | orchestrator | Friday 30 January 2026 06:08:58 +0000 (0:00:00.875) 0:20:52.262 ********
2026-01-30 06:09:06.952824 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.952839 | orchestrator |
2026-01-30 06:09:06.952853 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-30 06:09:06.952866 | orchestrator | Friday 30 January 2026 06:08:59 +0000 (0:00:00.904) 0:20:53.167 ********
2026-01-30 06:09:06.952881 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-30 06:09:06.952965 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 06:09:06.952981 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-30 06:09:06.952996 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.953010 | orchestrator |
2026-01-30 06:09:06.953022 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-30 06:09:06.953031 | orchestrator | Friday 30 January 2026 06:09:00 +0000 (0:00:01.052) 0:20:54.220 ********
2026-01-30 06:09:06.953049 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.953058 | orchestrator |
2026-01-30 06:09:06.953067 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-30 06:09:06.953075 | orchestrator | Friday 30 January 2026 06:09:01 +0000 (0:00:00.763) 0:20:54.983 ********
2026-01-30 06:09:06.953084 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.953093 | orchestrator |
2026-01-30 06:09:06.953108 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml]
****************************************
2026-01-30 06:09:06.953117 | orchestrator | Friday 30 January 2026 06:09:02 +0000 (0:00:00.771) 0:20:55.754 ********
2026-01-30 06:09:06.953126 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.953135 | orchestrator |
2026-01-30 06:09:06.953143 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-30 06:09:06.953152 | orchestrator | Friday 30 January 2026 06:09:02 +0000 (0:00:00.778) 0:20:56.533 ********
2026-01-30 06:09:06.953161 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:09:06.953169 | orchestrator |
2026-01-30 06:09:06.953178 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-01-30 06:09:06.953187 | orchestrator |
2026-01-30 06:09:06.953195 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-01-30 06:09:06.953204 | orchestrator | Friday 30 January 2026 06:09:03 +0000 (0:00:00.952) 0:20:57.486 ********
2026-01-30 06:09:06.953213 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:06.953221 | orchestrator |
2026-01-30 06:09:06.953230 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:09:06.953238 | orchestrator | Friday 30 January 2026 06:09:04 +0000 (0:00:00.760) 0:20:58.246 ********
2026-01-30 06:09:06.953247 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:06.953256 | orchestrator |
2026-01-30 06:09:06.953264 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 06:09:06.953273 | orchestrator | Friday 30 January 2026 06:09:05 +0000 (0:00:00.788) 0:20:59.034 ********
2026-01-30 06:09:06.953282 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:06.953297 | orchestrator |
2026-01-30 06:09:06.953310 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 06:09:06.953324 | orchestrator | Friday 30 January 2026 06:09:06 +0000 (0:00:00.750) 0:20:59.785 ********
2026-01-30 06:09:06.953349 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063066 | orchestrator |
2026-01-30 06:09:38.063188 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 06:09:38.063199 | orchestrator | Friday 30 January 2026 06:09:06 +0000 (0:00:00.765) 0:21:00.551 ********
2026-01-30 06:09:38.063207 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063215 | orchestrator |
2026-01-30 06:09:38.063222 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 06:09:38.063230 | orchestrator | Friday 30 January 2026 06:09:07 +0000 (0:00:00.798) 0:21:01.350 ********
2026-01-30 06:09:38.063238 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063244 | orchestrator |
2026-01-30 06:09:38.063251 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 06:09:38.063258 | orchestrator | Friday 30 January 2026 06:09:08 +0000 (0:00:00.809) 0:21:02.160 ********
2026-01-30 06:09:38.063264 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063271 | orchestrator |
2026-01-30 06:09:38.063278 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 06:09:38.063284 | orchestrator | Friday 30 January 2026 06:09:09 +0000 (0:00:00.752) 0:21:02.912 ********
2026-01-30 06:09:38.063291 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063297 | orchestrator |
2026-01-30 06:09:38.063304 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 06:09:38.063311 | orchestrator | Friday 30 January 2026 06:09:10 +0000 (0:00:00.782) 0:21:03.695 ********
2026-01-30 06:09:38.063317 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063324
| orchestrator |
2026-01-30 06:09:38.063330 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 06:09:38.063356 | orchestrator | Friday 30 January 2026 06:09:10 +0000 (0:00:00.811) 0:21:04.506 ********
2026-01-30 06:09:38.063363 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063370 | orchestrator |
2026-01-30 06:09:38.063376 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 06:09:38.063383 | orchestrator | Friday 30 January 2026 06:09:11 +0000 (0:00:00.757) 0:21:05.263 ********
2026-01-30 06:09:38.063390 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063396 | orchestrator |
2026-01-30 06:09:38.063403 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 06:09:38.063409 | orchestrator | Friday 30 January 2026 06:09:12 +0000 (0:00:00.814) 0:21:06.077 ********
2026-01-30 06:09:38.063416 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063422 | orchestrator |
2026-01-30 06:09:38.063429 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 06:09:38.063435 | orchestrator | Friday 30 January 2026 06:09:13 +0000 (0:00:00.774) 0:21:06.852 ********
2026-01-30 06:09:38.063442 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063448 | orchestrator |
2026-01-30 06:09:38.063455 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 06:09:38.063462 | orchestrator | Friday 30 January 2026 06:09:14 +0000 (0:00:00.783) 0:21:07.635 ********
2026-01-30 06:09:38.063468 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063475 | orchestrator |
2026-01-30 06:09:38.063482 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 06:09:38.063488 | orchestrator | Friday 30 January 2026
06:09:14 +0000 (0:00:00.775) 0:21:08.411 ********
2026-01-30 06:09:38.063495 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063501 | orchestrator |
2026-01-30 06:09:38.063508 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 06:09:38.063516 | orchestrator | Friday 30 January 2026 06:09:15 +0000 (0:00:00.757) 0:21:09.168 ********
2026-01-30 06:09:38.063524 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063532 | orchestrator |
2026-01-30 06:09:38.063540 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 06:09:38.063547 | orchestrator | Friday 30 January 2026 06:09:16 +0000 (0:00:00.767) 0:21:09.935 ********
2026-01-30 06:09:38.063555 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063563 | orchestrator |
2026-01-30 06:09:38.063571 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 06:09:38.063578 | orchestrator | Friday 30 January 2026 06:09:17 +0000 (0:00:00.788) 0:21:10.724 ********
2026-01-30 06:09:38.063586 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063594 | orchestrator |
2026-01-30 06:09:38.063613 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 06:09:38.063622 | orchestrator | Friday 30 January 2026 06:09:17 +0000 (0:00:00.769) 0:21:11.494 ********
2026-01-30 06:09:38.063629 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063637 | orchestrator |
2026-01-30 06:09:38.063646 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-01-30 06:09:38.063654 | orchestrator | Friday 30 January 2026 06:09:18 +0000 (0:00:00.786) 0:21:12.280 ********
2026-01-30 06:09:38.063663 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063670 | orchestrator |
2026-01-30 06:09:38.063678 |
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:09:38.063686 | orchestrator | Friday 30 January 2026 06:09:19 +0000 (0:00:00.782) 0:21:13.062 ******** 2026-01-30 06:09:38.063694 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:09:38.063702 | orchestrator | 2026-01-30 06:09:38.063709 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:09:38.063716 | orchestrator | Friday 30 January 2026 06:09:20 +0000 (0:00:00.794) 0:21:13.857 ******** 2026-01-30 06:09:38.063723 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:09:38.063729 | orchestrator | 2026-01-30 06:09:38.063741 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:09:38.063748 | orchestrator | Friday 30 January 2026 06:09:21 +0000 (0:00:00.784) 0:21:14.642 ******** 2026-01-30 06:09:38.063755 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:09:38.063761 | orchestrator | 2026-01-30 06:09:38.063768 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-01-30 06:09:38.063775 | orchestrator | Friday 30 January 2026 06:09:21 +0000 (0:00:00.753) 0:21:15.396 ******** 2026-01-30 06:09:38.063781 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:09:38.063788 | orchestrator | 2026-01-30 06:09:38.063809 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:09:38.063817 | orchestrator | Friday 30 January 2026 06:09:22 +0000 (0:00:00.764) 0:21:16.160 ******** 2026-01-30 06:09:38.063823 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:09:38.063830 | orchestrator | 2026-01-30 06:09:38.063836 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:09:38.063843 | orchestrator | Friday 30 January 2026 06:09:23 +0000 (0:00:00.785) 0:21:16.946 ******** 
2026-01-30 06:09:38.063850 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063857 | orchestrator |
2026-01-30 06:09:38.063863 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 06:09:38.063912 | orchestrator | Friday 30 January 2026 06:09:24 +0000 (0:00:00.753) 0:21:17.700 ********
2026-01-30 06:09:38.063919 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063926 | orchestrator |
2026-01-30 06:09:38.063933 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 06:09:38.063940 | orchestrator | Friday 30 January 2026 06:09:24 +0000 (0:00:00.773) 0:21:18.474 ********
2026-01-30 06:09:38.063946 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063953 | orchestrator |
2026-01-30 06:09:38.063960 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 06:09:38.063966 | orchestrator | Friday 30 January 2026 06:09:25 +0000 (0:00:00.786) 0:21:19.260 ********
2026-01-30 06:09:38.063973 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.063980 | orchestrator |
2026-01-30 06:09:38.063986 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-30 06:09:38.063993 | orchestrator | Friday 30 January 2026 06:09:26 +0000 (0:00:00.811) 0:21:20.072 ********
2026-01-30 06:09:38.064000 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064006 | orchestrator |
2026-01-30 06:09:38.064013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-30 06:09:38.064024 | orchestrator | Friday 30 January 2026 06:09:27 +0000 (0:00:00.797) 0:21:20.870 ********
2026-01-30 06:09:38.064034 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064044 | orchestrator |
2026-01-30 06:09:38.064051 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-30 06:09:38.064057 | orchestrator | Friday 30 January 2026 06:09:28 +0000 (0:00:00.768) 0:21:21.638 ********
2026-01-30 06:09:38.064064 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064071 | orchestrator |
2026-01-30 06:09:38.064077 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-30 06:09:38.064084 | orchestrator | Friday 30 January 2026 06:09:28 +0000 (0:00:00.753) 0:21:22.392 ********
2026-01-30 06:09:38.064090 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064097 | orchestrator |
2026-01-30 06:09:38.064104 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-30 06:09:38.064110 | orchestrator | Friday 30 January 2026 06:09:29 +0000 (0:00:00.818) 0:21:23.211 ********
2026-01-30 06:09:38.064117 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064124 | orchestrator |
2026-01-30 06:09:38.064130 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-30 06:09:38.064137 | orchestrator | Friday 30 January 2026 06:09:30 +0000 (0:00:00.749) 0:21:23.960 ********
2026-01-30 06:09:38.064144 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064150 | orchestrator |
2026-01-30 06:09:38.064164 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 06:09:38.064171 | orchestrator | Friday 30 January 2026 06:09:31 +0000 (0:00:00.767) 0:21:24.728 ********
2026-01-30 06:09:38.064177 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064184 | orchestrator |
2026-01-30 06:09:38.064190 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 06:09:38.064197 | orchestrator | Friday 30 January 2026 06:09:31 +0000 (0:00:00.768) 0:21:25.496 ********
2026-01-30 06:09:38.064204 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064210 | orchestrator |
2026-01-30 06:09:38.064217 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 06:09:38.064223 | orchestrator | Friday 30 January 2026 06:09:32 +0000 (0:00:00.754) 0:21:26.250 ********
2026-01-30 06:09:38.064230 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064237 | orchestrator |
2026-01-30 06:09:38.064248 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 06:09:38.064255 | orchestrator | Friday 30 January 2026 06:09:33 +0000 (0:00:00.744) 0:21:26.995 ********
2026-01-30 06:09:38.064262 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064268 | orchestrator |
2026-01-30 06:09:38.064275 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-30 06:09:38.064282 | orchestrator | Friday 30 January 2026 06:09:34 +0000 (0:00:00.761) 0:21:27.756 ********
2026-01-30 06:09:38.064289 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064296 | orchestrator |
2026-01-30 06:09:38.064302 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 06:09:38.064309 | orchestrator | Friday 30 January 2026 06:09:34 +0000 (0:00:00.761) 0:21:28.518 ********
2026-01-30 06:09:38.064316 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064322 | orchestrator |
2026-01-30 06:09:38.064329 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-30 06:09:38.064336 | orchestrator | Friday 30 January 2026 06:09:35 +0000 (0:00:00.772) 0:21:29.291 ********
2026-01-30 06:09:38.064342 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064349 | orchestrator |
2026-01-30 06:09:38.064355 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-30 06:09:38.064362 | orchestrator | Friday 30 January 2026 06:09:36 +0000 (0:00:00.797) 0:21:30.089 ********
2026-01-30 06:09:38.064379 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064386 | orchestrator |
2026-01-30 06:09:38.064445 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-30 06:09:38.064454 | orchestrator | Friday 30 January 2026 06:09:37 +0000 (0:00:00.785) 0:21:30.874 ********
2026-01-30 06:09:38.064461 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:09:38.064468 | orchestrator |
2026-01-30 06:09:38.064480 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-30 06:10:29.219676 | orchestrator | Friday 30 January 2026 06:09:38 +0000 (0:00:00.781) 0:21:31.656 ********
2026-01-30 06:10:29.219762 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219824 | orchestrator |
2026-01-30 06:10:29.219831 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-30 06:10:29.219836 | orchestrator | Friday 30 January 2026 06:09:38 +0000 (0:00:00.763) 0:21:32.419 ********
2026-01-30 06:10:29.219839 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219876 | orchestrator |
2026-01-30 06:10:29.219881 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-30 06:10:29.219885 | orchestrator | Friday 30 January 2026 06:09:39 +0000 (0:00:00.843) 0:21:33.263 ********
2026-01-30 06:10:29.219889 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219893 | orchestrator |
2026-01-30 06:10:29.219897 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-30 06:10:29.219902 | orchestrator | Friday 30 January 2026 06:09:40 +0000 (0:00:00.799) 0:21:34.062 ********
2026-01-30 06:10:29.219923 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219927 | orchestrator |
2026-01-30 06:10:29.219931 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-30 06:10:29.219935 | orchestrator | Friday 30 January 2026 06:09:41 +0000 (0:00:00.875) 0:21:34.937 ********
2026-01-30 06:10:29.219939 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219943 | orchestrator |
2026-01-30 06:10:29.219947 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-30 06:10:29.219951 | orchestrator | Friday 30 January 2026 06:09:42 +0000 (0:00:00.780) 0:21:35.718 ********
2026-01-30 06:10:29.219955 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219958 | orchestrator |
2026-01-30 06:10:29.219963 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:10:29.219968 | orchestrator | Friday 30 January 2026 06:09:42 +0000 (0:00:00.828) 0:21:36.547 ********
2026-01-30 06:10:29.219972 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219976 | orchestrator |
2026-01-30 06:10:29.219980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:10:29.219983 | orchestrator | Friday 30 January 2026 06:09:43 +0000 (0:00:00.759) 0:21:37.306 ********
2026-01-30 06:10:29.219987 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.219991 | orchestrator |
2026-01-30 06:10:29.219995 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:10:29.219998 | orchestrator | Friday 30 January 2026 06:09:44 +0000 (0:00:00.770) 0:21:38.077 ********
2026-01-30 06:10:29.220002 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220006 | orchestrator |
2026-01-30 06:10:29.220010 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:10:29.220013 | orchestrator | Friday 30 January 2026 06:09:45 +0000 (0:00:00.794) 0:21:38.871 ********
2026-01-30 06:10:29.220017 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220021 | orchestrator |
2026-01-30 06:10:29.220025 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:10:29.220028 | orchestrator | Friday 30 January 2026 06:09:46 +0000 (0:00:00.798) 0:21:39.670 ********
2026-01-30 06:10:29.220032 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-30 06:10:29.220036 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-30 06:10:29.220040 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-30 06:10:29.220044 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220048 | orchestrator |
2026-01-30 06:10:29.220051 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:10:29.220055 | orchestrator | Friday 30 January 2026 06:09:47 +0000 (0:00:01.392) 0:21:41.063 ********
2026-01-30 06:10:29.220059 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-30 06:10:29.220063 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-30 06:10:29.220067 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-30 06:10:29.220070 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220074 | orchestrator |
2026-01-30 06:10:29.220088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:10:29.220092 | orchestrator | Friday 30 January 2026 06:09:48 +0000 (0:00:01.034) 0:21:42.097 ********
2026-01-30 06:10:29.220096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-30 06:10:29.220100 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-30 06:10:29.220103 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-30 06:10:29.220107 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220111 | orchestrator |
2026-01-30 06:10:29.220115 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:10:29.220118 | orchestrator | Friday 30 January 2026 06:09:49 +0000 (0:00:01.032) 0:21:43.130 ********
2026-01-30 06:10:29.220129 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220132 | orchestrator |
2026-01-30 06:10:29.220137 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:10:29.220140 | orchestrator | Friday 30 January 2026 06:09:50 +0000 (0:00:00.838) 0:21:43.969 ********
2026-01-30 06:10:29.220145 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-30 06:10:29.220148 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220152 | orchestrator |
2026-01-30 06:10:29.220156 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 06:10:29.220160 | orchestrator | Friday 30 January 2026 06:09:51 +0000 (0:00:00.910) 0:21:44.880 ********
2026-01-30 06:10:29.220163 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220167 | orchestrator |
2026-01-30 06:10:29.220171 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-30 06:10:29.220174 | orchestrator | Friday 30 January 2026 06:09:52 +0000 (0:00:00.754) 0:21:45.635 ********
2026-01-30 06:10:29.220178 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 06:10:29.220193 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 06:10:29.220197 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:10:29.220201 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220205 | orchestrator |
2026-01-30 06:10:29.220209 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-30 06:10:29.220212 | orchestrator | Friday 30 January 2026 06:09:53 +0000 (0:00:01.025) 0:21:46.660 ********
2026-01-30 06:10:29.220216 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220220 | orchestrator |
2026-01-30 06:10:29.220224 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-30 06:10:29.220227 | orchestrator | Friday 30 January 2026 06:09:53 +0000 (0:00:00.737) 0:21:47.398 ********
2026-01-30 06:10:29.220231 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220235 | orchestrator |
2026-01-30 06:10:29.220238 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-30 06:10:29.220242 | orchestrator | Friday 30 January 2026 06:09:54 +0000 (0:00:00.749) 0:21:48.147 ********
2026-01-30 06:10:29.220246 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220249 | orchestrator |
2026-01-30 06:10:29.220253 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-30 06:10:29.220257 | orchestrator | Friday 30 January 2026 06:09:55 +0000 (0:00:00.716) 0:21:48.864 ********
2026-01-30 06:10:29.220261 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:10:29.220264 | orchestrator |
2026-01-30 06:10:29.220268 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-01-30 06:10:29.220272 | orchestrator |
2026-01-30 06:10:29.220275 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-01-30 06:10:29.220279 | orchestrator | Friday 30 January 2026 06:09:56 +0000 (0:00:01.447) 0:21:50.312 ********
2026-01-30 06:10:29.220283 | orchestrator | changed: [testbed-node-0]
2026-01-30 06:10:29.220287 | orchestrator |
2026-01-30 06:10:29.220291 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-01-30 06:10:29.220294 | orchestrator | Friday 30 January 2026 06:10:09 +0000 (0:00:13.053) 0:22:03.366 ********
2026-01-30 06:10:29.220298 | orchestrator | changed: [testbed-node-0]
2026-01-30 06:10:29.220302 | orchestrator |
2026-01-30 06:10:29.220305 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:10:29.220309 | orchestrator | Friday 30 January 2026 06:10:12 +0000 (0:00:02.673) 0:22:06.039 ********
2026-01-30 06:10:29.220313 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-01-30 06:10:29.220317 | orchestrator |
2026-01-30 06:10:29.220320 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-30 06:10:29.220324 | orchestrator | Friday 30 January 2026 06:10:13 +0000 (0:00:01.123) 0:22:07.163 ********
2026-01-30 06:10:29.220328 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220335 | orchestrator |
2026-01-30 06:10:29.220339 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-30 06:10:29.220343 | orchestrator | Friday 30 January 2026 06:10:15 +0000 (0:00:01.472) 0:22:08.635 ********
2026-01-30 06:10:29.220347 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220350 | orchestrator |
2026-01-30 06:10:29.220354 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 06:10:29.220358 | orchestrator | Friday 30 January 2026 06:10:16 +0000 (0:00:01.161) 0:22:09.797 ********
2026-01-30 06:10:29.220362 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220365 | orchestrator |
2026-01-30 06:10:29.220369 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 06:10:29.220373 | orchestrator | Friday 30 January 2026 06:10:17 +0000 (0:00:01.484) 0:22:11.281 ********
2026-01-30 06:10:29.220376 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220380 | orchestrator |
2026-01-30 06:10:29.220384 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 06:10:29.220388 | orchestrator | Friday 30 January 2026 06:10:18 +0000 (0:00:01.124) 0:22:12.405 ********
2026-01-30 06:10:29.220391 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220395 | orchestrator |
2026-01-30 06:10:29.220399 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 06:10:29.220405 | orchestrator | Friday 30 January 2026 06:10:19 +0000 (0:00:01.122) 0:22:13.528 ********
2026-01-30 06:10:29.220409 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220412 | orchestrator |
2026-01-30 06:10:29.220416 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 06:10:29.220421 | orchestrator | Friday 30 January 2026 06:10:21 +0000 (0:00:01.176) 0:22:14.704 ********
2026-01-30 06:10:29.220424 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:29.220428 | orchestrator |
2026-01-30 06:10:29.220432 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 06:10:29.220436 | orchestrator | Friday 30 January 2026 06:10:22 +0000 (0:00:01.146) 0:22:15.850 ********
2026-01-30 06:10:29.220439 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220443 | orchestrator |
2026-01-30 06:10:29.220447 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 06:10:29.220450 | orchestrator | Friday 30 January 2026 06:10:23 +0000 (0:00:01.186) 0:22:17.037 ********
2026-01-30 06:10:29.220454 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 06:10:29.220458 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:10:29.220462 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:10:29.220466 | orchestrator |
2026-01-30 06:10:29.220469 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 06:10:29.220473 | orchestrator | Friday 30 January 2026 06:10:25 +0000 (0:00:01.669) 0:22:18.707 ********
2026-01-30 06:10:29.220477 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:29.220481 | orchestrator |
2026-01-30 06:10:29.220484 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 06:10:29.220488 | orchestrator | Friday 30 January 2026 06:10:26 +0000 (0:00:01.225) 0:22:19.933 ********
2026-01-30 06:10:29.220492 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 06:10:29.220498 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:10:52.264490 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:10:52.264584 | orchestrator |
2026-01-30 06:10:52.264595 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 06:10:52.264603 | orchestrator | Friday 30 January 2026 06:10:29 +0000 (0:00:02.885) 0:22:22.818 ********
2026-01-30 06:10:52.264610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-30 06:10:52.264617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-30 06:10:52.264624 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-30 06:10:52.264650 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.264661 | orchestrator |
2026-01-30 06:10:52.264672 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-30 06:10:52.264682 | orchestrator | Friday 30 January 2026 06:10:30 +0000 (0:00:01.468) 0:22:24.287 ********
2026-01-30 06:10:52.264698 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264712 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264722 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264732 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.264743 | orchestrator |
2026-01-30 06:10:52.264753 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-30 06:10:52.264762 | orchestrator | Friday 30 January 2026 06:10:32 +0000 (0:00:01.613) 0:22:25.901 ********
2026-01-30 06:10:52.264775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264787 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264806 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264813 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.264819 | orchestrator |
2026-01-30 06:10:52.264825 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-30 06:10:52.264878 | orchestrator | Friday 30 January 2026 06:10:33 +0000 (0:00:01.195) 0:22:27.096 ********
2026-01-30 06:10:52.264888 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:10:26.868216', 'end': '2026-01-30 06:10:26.909871', 'delta': '0:00:00.041655', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264915 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:10:27.484568', 'end': '2026-01-30 06:10:27.536264', 'delta': '0:00:00.051696', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264930 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:10:28.038879', 'end': '2026-01-30 06:10:28.083555', 'delta': '0:00:00.044676', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:10:52.264937 | orchestrator |
2026-01-30 06:10:52.264943 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-30 06:10:52.264949 | orchestrator | Friday 30 January 2026 06:10:34 +0000 (0:00:01.189) 0:22:28.286 ********
2026-01-30 06:10:52.264961 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:52.264973 | orchestrator |
2026-01-30 06:10:52.264983 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 06:10:52.264994 | orchestrator | Friday 30 January 2026 06:10:35 +0000 (0:00:01.232) 0:22:29.518 ********
2026-01-30 06:10:52.265005 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265014 | orchestrator |
2026-01-30 06:10:52.265024 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 06:10:52.265034 | orchestrator | Friday 30 January 2026 06:10:37 +0000 (0:00:01.220) 0:22:30.739 ********
2026-01-30 06:10:52.265045 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:52.265057 | orchestrator |
2026-01-30 06:10:52.265066 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 06:10:52.265074 | orchestrator | Friday 30 January 2026 06:10:38 +0000 (0:00:01.227) 0:22:31.967 ********
2026-01-30 06:10:52.265081 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:52.265088 | orchestrator |
2026-01-30 06:10:52.265095 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:10:52.265102 | orchestrator | Friday 30 January 2026 06:10:40 +0000 (0:00:02.059) 0:22:34.026 ********
2026-01-30 06:10:52.265109 | orchestrator | ok: [testbed-node-0]
2026-01-30 06:10:52.265116 | orchestrator |
2026-01-30 06:10:52.265124 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 06:10:52.265131 | orchestrator | Friday 30 January 2026 06:10:41 +0000 (0:00:01.136) 0:22:35.163 ********
2026-01-30 06:10:52.265138 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265145 | orchestrator |
2026-01-30 06:10:52.265152 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 06:10:52.265159 | orchestrator | Friday 30 January 2026 06:10:42 +0000 (0:00:01.106) 0:22:36.270 ********
2026-01-30 06:10:52.265166 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265173 | orchestrator |
2026-01-30 06:10:52.265180 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:10:52.265192 | orchestrator | Friday 30 January 2026 06:10:44 +0000 (0:00:01.661) 0:22:37.931 ********
2026-01-30 06:10:52.265200 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265207 | orchestrator |
2026-01-30 06:10:52.265214 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 06:10:52.265227 | orchestrator | Friday 30 January 2026 06:10:45 +0000 (0:00:01.127) 0:22:39.059 ********
2026-01-30 06:10:52.265234 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265240 | orchestrator |
2026-01-30 06:10:52.265248 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 06:10:52.265255 | orchestrator | Friday 30 January 2026 06:10:46 +0000 (0:00:01.131) 0:22:40.191 ********
2026-01-30 06:10:52.265261 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265289 | orchestrator |
2026-01-30 06:10:52.265304 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 06:10:52.265311 | orchestrator | Friday 30 January 2026 06:10:47 +0000 (0:00:01.109) 0:22:41.300 ********
2026-01-30 06:10:52.265318 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265326 | orchestrator |
2026-01-30 06:10:52.265332 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 06:10:52.265340 | orchestrator | Friday 30 January 2026 06:10:48 +0000 (0:00:01.169) 0:22:42.470 ********
2026-01-30 06:10:52.265346 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265354 | orchestrator |
2026-01-30 06:10:52.265361 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 06:10:52.265368 | orchestrator | Friday 30 January 2026 06:10:49 +0000 (0:00:01.107) 0:22:43.577 ********
2026-01-30 06:10:52.265375 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265382 | orchestrator |
2026-01-30 06:10:52.265390 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 06:10:52.265397 | orchestrator | Friday 30 January 2026 06:10:51 +0000 (0:00:01.178) 0:22:44.756 ********
2026-01-30 06:10:52.265408 | orchestrator | skipping: [testbed-node-0]
2026-01-30 06:10:52.265418 | orchestrator |
2026-01-30 06:10:52.265444 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 06:10:54.805266 | orchestrator | Friday 30 January 2026 06:10:52 +0000 (0:00:01.107) 0:22:45.864 ********
2026-01-30 06:10:54.805360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:10:54.805373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:10:54.805381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:10:54.805394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 06:10:54.805407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:10:54.805456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:10:54.805469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00
Bytes', 'host': '', 'holders': []}})  2026-01-30 06:10:54.805502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:10:54.805515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:10:54.805522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:10:54.805535 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:10:54.805542 | orchestrator | 2026-01-30 06:10:54.805550 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:10:54.805557 | orchestrator | Friday 30 January 2026 06:10:53 +0000 (0:00:01.256) 0:22:47.121 ******** 2026-01-30 06:10:54.805568 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:10:54.805577 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:10:54.805588 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305284 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305381 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305393 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305428 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305479 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305495 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305516 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:11:05.305528 | orchestrator | skipping: [testbed-node-0] 2026-01-30 
06:11:05.305537 | orchestrator | 2026-01-30 06:11:05.305545 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:11:05.305553 | orchestrator | Friday 30 January 2026 06:10:54 +0000 (0:00:01.288) 0:22:48.410 ******** 2026-01-30 06:11:05.305559 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:05.305567 | orchestrator | 2026-01-30 06:11:05.305574 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:11:05.305580 | orchestrator | Friday 30 January 2026 06:10:56 +0000 (0:00:01.572) 0:22:49.983 ******** 2026-01-30 06:11:05.305587 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:05.305594 | orchestrator | 2026-01-30 06:11:05.305600 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:11:05.305611 | orchestrator | Friday 30 January 2026 06:10:57 +0000 (0:00:01.118) 0:22:51.101 ******** 2026-01-30 06:11:05.305618 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:05.305624 | orchestrator | 2026-01-30 06:11:05.305633 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:11:05.305644 | orchestrator | Friday 30 January 2026 06:10:58 +0000 (0:00:01.488) 0:22:52.590 ******** 2026-01-30 06:11:05.305655 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:05.305665 | orchestrator | 2026-01-30 06:11:05.305675 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:11:05.305686 | orchestrator | Friday 30 January 2026 06:11:00 +0000 (0:00:01.138) 0:22:53.728 ******** 2026-01-30 06:11:05.305697 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:05.305709 | orchestrator | 2026-01-30 06:11:05.305719 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:11:05.305731 | orchestrator | Friday 30 January 2026 
06:11:01 +0000 (0:00:01.241) 0:22:54.970 ******** 2026-01-30 06:11:05.305739 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:05.305746 | orchestrator | 2026-01-30 06:11:05.305753 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:11:05.305759 | orchestrator | Friday 30 January 2026 06:11:02 +0000 (0:00:01.127) 0:22:56.097 ******** 2026-01-30 06:11:05.305766 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:11:05.305773 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-01-30 06:11:05.305780 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-30 06:11:05.305787 | orchestrator | 2026-01-30 06:11:05.305794 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:11:05.305800 | orchestrator | Friday 30 January 2026 06:11:04 +0000 (0:00:01.629) 0:22:57.726 ******** 2026-01-30 06:11:05.305807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 06:11:05.305814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 06:11:05.305821 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 06:11:05.305857 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:05.305865 | orchestrator | 2026-01-30 06:11:05.305879 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:11:47.418795 | orchestrator | Friday 30 January 2026 06:11:05 +0000 (0:00:01.175) 0:22:58.902 ******** 2026-01-30 06:11:47.419097 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.419187 | orchestrator | 2026-01-30 06:11:47.419211 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:11:47.419231 | orchestrator | Friday 30 January 2026 06:11:06 +0000 (0:00:01.120) 0:23:00.022 ******** 2026-01-30 06:11:47.419251 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:11:47.419271 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:11:47.419291 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:11:47.419313 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:11:47.419335 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:11:47.419357 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:11:47.419377 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:11:47.419397 | orchestrator | 2026-01-30 06:11:47.419420 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:11:47.419443 | orchestrator | Friday 30 January 2026 06:11:08 +0000 (0:00:01.632) 0:23:01.655 ******** 2026-01-30 06:11:47.419465 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:11:47.419487 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:11:47.419509 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:11:47.419532 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:11:47.419554 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:11:47.419577 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:11:47.419598 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:11:47.419621 | orchestrator | 2026-01-30 06:11:47.419640 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 06:11:47.419660 | orchestrator | Friday 30 January 2026 06:11:10 +0000 (0:00:02.235) 0:23:03.891 ******** 2026-01-30 06:11:47.419680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-01-30 06:11:47.419702 | orchestrator | 2026-01-30 06:11:47.419722 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 06:11:47.419741 | orchestrator | Friday 30 January 2026 06:11:11 +0000 (0:00:01.060) 0:23:04.952 ******** 2026-01-30 06:11:47.419761 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-01-30 06:11:47.419781 | orchestrator | 2026-01-30 06:11:47.419800 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 06:11:47.419901 | orchestrator | Friday 30 January 2026 06:11:12 +0000 (0:00:01.036) 0:23:05.989 ******** 2026-01-30 06:11:47.419922 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:47.419942 | orchestrator | 2026-01-30 06:11:47.419963 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 06:11:47.419983 | orchestrator | Friday 30 January 2026 06:11:14 +0000 (0:00:01.647) 0:23:07.636 ******** 2026-01-30 06:11:47.420003 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.420023 | orchestrator | 2026-01-30 06:11:47.420064 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 06:11:47.420084 | orchestrator | Friday 30 January 2026 06:11:15 +0000 (0:00:01.112) 0:23:08.749 ******** 2026-01-30 06:11:47.420105 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.420124 | orchestrator | 2026-01-30 06:11:47.420143 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-01-30 06:11:47.420164 | orchestrator | Friday 30 January 2026 06:11:16 +0000 (0:00:01.093) 0:23:09.843 ******** 2026-01-30 06:11:47.420184 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.420225 | orchestrator | 2026-01-30 06:11:47.420246 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:11:47.420266 | orchestrator | Friday 30 January 2026 06:11:17 +0000 (0:00:01.077) 0:23:10.920 ******** 2026-01-30 06:11:47.420286 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:47.420307 | orchestrator | 2026-01-30 06:11:47.420326 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:11:47.420344 | orchestrator | Friday 30 January 2026 06:11:18 +0000 (0:00:01.537) 0:23:12.458 ******** 2026-01-30 06:11:47.420363 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.420381 | orchestrator | 2026-01-30 06:11:47.420398 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:11:47.420416 | orchestrator | Friday 30 January 2026 06:11:19 +0000 (0:00:01.094) 0:23:13.553 ******** 2026-01-30 06:11:47.420433 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.420451 | orchestrator | 2026-01-30 06:11:47.420469 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:11:47.420487 | orchestrator | Friday 30 January 2026 06:11:21 +0000 (0:00:01.128) 0:23:14.681 ******** 2026-01-30 06:11:47.420504 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:47.420521 | orchestrator | 2026-01-30 06:11:47.420538 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:11:47.420556 | orchestrator | Friday 30 January 2026 06:11:22 +0000 (0:00:01.504) 0:23:16.186 ******** 2026-01-30 06:11:47.420575 | orchestrator | ok: [testbed-node-0] 2026-01-30 
06:11:47.420621 | orchestrator | 2026-01-30 06:11:47.420654 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:11:47.420705 | orchestrator | Friday 30 January 2026 06:11:24 +0000 (0:00:01.550) 0:23:17.737 ******** 2026-01-30 06:11:47.420727 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.420896 | orchestrator | 2026-01-30 06:11:47.420919 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:11:47.420937 | orchestrator | Friday 30 January 2026 06:11:25 +0000 (0:00:01.076) 0:23:18.813 ******** 2026-01-30 06:11:47.420956 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:47.420974 | orchestrator | 2026-01-30 06:11:47.420992 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:11:47.421010 | orchestrator | Friday 30 January 2026 06:11:26 +0000 (0:00:01.090) 0:23:19.903 ******** 2026-01-30 06:11:47.421025 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421036 | orchestrator | 2026-01-30 06:11:47.421047 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:11:47.421057 | orchestrator | Friday 30 January 2026 06:11:27 +0000 (0:00:01.096) 0:23:21.000 ******** 2026-01-30 06:11:47.421068 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421079 | orchestrator | 2026-01-30 06:11:47.421089 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:11:47.421100 | orchestrator | Friday 30 January 2026 06:11:28 +0000 (0:00:01.115) 0:23:22.115 ******** 2026-01-30 06:11:47.421111 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421121 | orchestrator | 2026-01-30 06:11:47.421130 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:11:47.421140 | orchestrator | Friday 30 January 
2026 06:11:29 +0000 (0:00:01.142) 0:23:23.257 ******** 2026-01-30 06:11:47.421149 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421159 | orchestrator | 2026-01-30 06:11:47.421168 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:11:47.421178 | orchestrator | Friday 30 January 2026 06:11:30 +0000 (0:00:01.089) 0:23:24.347 ******** 2026-01-30 06:11:47.421188 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421197 | orchestrator | 2026-01-30 06:11:47.421206 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:11:47.421216 | orchestrator | Friday 30 January 2026 06:11:31 +0000 (0:00:01.108) 0:23:25.456 ******** 2026-01-30 06:11:47.421225 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:47.421250 | orchestrator | 2026-01-30 06:11:47.421260 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:11:47.421270 | orchestrator | Friday 30 January 2026 06:11:32 +0000 (0:00:01.118) 0:23:26.575 ******** 2026-01-30 06:11:47.421284 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:47.421300 | orchestrator | 2026-01-30 06:11:47.421314 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:11:47.421329 | orchestrator | Friday 30 January 2026 06:11:34 +0000 (0:00:01.133) 0:23:27.708 ******** 2026-01-30 06:11:47.421346 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:11:47.421363 | orchestrator | 2026-01-30 06:11:47.421379 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:11:47.421395 | orchestrator | Friday 30 January 2026 06:11:35 +0000 (0:00:01.119) 0:23:28.827 ******** 2026-01-30 06:11:47.421408 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421418 | orchestrator | 2026-01-30 06:11:47.421428 | orchestrator | TASK [ceph-common 
: Include installs/install_redhat_packages.yml] ************** 2026-01-30 06:11:47.421437 | orchestrator | Friday 30 January 2026 06:11:36 +0000 (0:00:01.066) 0:23:29.893 ******** 2026-01-30 06:11:47.421447 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421456 | orchestrator | 2026-01-30 06:11:47.421465 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:11:47.421475 | orchestrator | Friday 30 January 2026 06:11:37 +0000 (0:00:01.084) 0:23:30.978 ******** 2026-01-30 06:11:47.421484 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421494 | orchestrator | 2026-01-30 06:11:47.421503 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:11:47.421513 | orchestrator | Friday 30 January 2026 06:11:38 +0000 (0:00:01.167) 0:23:32.145 ******** 2026-01-30 06:11:47.421533 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421543 | orchestrator | 2026-01-30 06:11:47.421552 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:11:47.421562 | orchestrator | Friday 30 January 2026 06:11:39 +0000 (0:00:01.082) 0:23:33.228 ******** 2026-01-30 06:11:47.421572 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421581 | orchestrator | 2026-01-30 06:11:47.421591 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:11:47.421600 | orchestrator | Friday 30 January 2026 06:11:40 +0000 (0:00:01.105) 0:23:34.334 ******** 2026-01-30 06:11:47.421610 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421619 | orchestrator | 2026-01-30 06:11:47.421629 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:11:47.421638 | orchestrator | Friday 30 January 2026 06:11:41 +0000 (0:00:01.115) 0:23:35.450 ******** 2026-01-30 06:11:47.421647 | 
orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421657 | orchestrator | 2026-01-30 06:11:47.421706 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:11:47.421719 | orchestrator | Friday 30 January 2026 06:11:42 +0000 (0:00:01.135) 0:23:36.585 ******** 2026-01-30 06:11:47.421728 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421738 | orchestrator | 2026-01-30 06:11:47.421748 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:11:47.421757 | orchestrator | Friday 30 January 2026 06:11:44 +0000 (0:00:01.102) 0:23:37.688 ******** 2026-01-30 06:11:47.421766 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421776 | orchestrator | 2026-01-30 06:11:47.421785 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:11:47.421795 | orchestrator | Friday 30 January 2026 06:11:45 +0000 (0:00:01.105) 0:23:38.793 ******** 2026-01-30 06:11:47.421804 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421937 | orchestrator | 2026-01-30 06:11:47.421953 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:11:47.421963 | orchestrator | Friday 30 January 2026 06:11:46 +0000 (0:00:01.135) 0:23:39.929 ******** 2026-01-30 06:11:47.421973 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:11:47.421994 | orchestrator | 2026-01-30 06:11:47.422089 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-01-30 06:12:37.351705 | orchestrator | Friday 30 January 2026 06:11:47 +0000 (0:00:01.088) 0:23:41.018 ******** 2026-01-30 06:12:37.351792 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.351853 | orchestrator | 2026-01-30 06:12:37.351866 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] 
*************** 2026-01-30 06:12:37.351877 | orchestrator | Friday 30 January 2026 06:11:48 +0000 (0:00:01.140) 0:23:42.159 ******** 2026-01-30 06:12:37.351888 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:12:37.351899 | orchestrator | 2026-01-30 06:12:37.351909 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:12:37.351920 | orchestrator | Friday 30 January 2026 06:11:50 +0000 (0:00:02.060) 0:23:44.220 ******** 2026-01-30 06:12:37.351931 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:12:37.351941 | orchestrator | 2026-01-30 06:12:37.351950 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:12:37.351960 | orchestrator | Friday 30 January 2026 06:11:53 +0000 (0:00:02.646) 0:23:46.866 ******** 2026-01-30 06:12:37.351972 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-01-30 06:12:37.351984 | orchestrator | 2026-01-30 06:12:37.351994 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:12:37.352004 | orchestrator | Friday 30 January 2026 06:11:54 +0000 (0:00:01.121) 0:23:47.987 ******** 2026-01-30 06:12:37.352016 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352027 | orchestrator | 2026-01-30 06:12:37.352037 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:12:37.352048 | orchestrator | Friday 30 January 2026 06:11:55 +0000 (0:00:01.100) 0:23:49.088 ******** 2026-01-30 06:12:37.352059 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352070 | orchestrator | 2026-01-30 06:12:37.352081 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-30 06:12:37.352091 | orchestrator | Friday 30 January 2026 06:11:56 +0000 (0:00:01.098) 0:23:50.187 ******** 2026-01-30 06:12:37.352098 | 
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:12:37.352104 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:12:37.352111 | orchestrator | 2026-01-30 06:12:37.352117 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:12:37.352123 | orchestrator | Friday 30 January 2026 06:11:58 +0000 (0:00:01.931) 0:23:52.119 ******** 2026-01-30 06:12:37.352129 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:12:37.352135 | orchestrator | 2026-01-30 06:12:37.352141 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:12:37.352147 | orchestrator | Friday 30 January 2026 06:12:00 +0000 (0:00:01.543) 0:23:53.662 ******** 2026-01-30 06:12:37.352153 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352158 | orchestrator | 2026-01-30 06:12:37.352164 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:12:37.352170 | orchestrator | Friday 30 January 2026 06:12:01 +0000 (0:00:01.116) 0:23:54.778 ******** 2026-01-30 06:12:37.352176 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352182 | orchestrator | 2026-01-30 06:12:37.352188 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:12:37.352193 | orchestrator | Friday 30 January 2026 06:12:02 +0000 (0:00:01.103) 0:23:55.882 ******** 2026-01-30 06:12:37.352199 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352205 | orchestrator | 2026-01-30 06:12:37.352211 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:12:37.352217 | orchestrator | Friday 30 January 2026 06:12:03 +0000 (0:00:01.134) 0:23:57.017 ******** 2026-01-30 06:12:37.352223 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-01-30 06:12:37.352247 | orchestrator | 2026-01-30 06:12:37.352265 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:12:37.352271 | orchestrator | Friday 30 January 2026 06:12:04 +0000 (0:00:01.152) 0:23:58.169 ******** 2026-01-30 06:12:37.352278 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:12:37.352285 | orchestrator | 2026-01-30 06:12:37.352292 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:12:37.352299 | orchestrator | Friday 30 January 2026 06:12:06 +0000 (0:00:01.738) 0:23:59.907 ******** 2026-01-30 06:12:37.352305 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:12:37.352312 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:12:37.352318 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:12:37.352326 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352332 | orchestrator | 2026-01-30 06:12:37.352339 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:12:37.352346 | orchestrator | Friday 30 January 2026 06:12:07 +0000 (0:00:01.172) 0:24:01.079 ******** 2026-01-30 06:12:37.352351 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352357 | orchestrator | 2026-01-30 06:12:37.352363 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:12:37.352369 | orchestrator | Friday 30 January 2026 06:12:08 +0000 (0:00:01.144) 0:24:02.224 ******** 2026-01-30 06:12:37.352375 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352380 | orchestrator | 2026-01-30 06:12:37.352386 | orchestrator | TASK [ceph-container-common : Copy ceph dev image 
file] ************************ 2026-01-30 06:12:37.352392 | orchestrator | Friday 30 January 2026 06:12:09 +0000 (0:00:01.159) 0:24:03.384 ******** 2026-01-30 06:12:37.352397 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352403 | orchestrator | 2026-01-30 06:12:37.352409 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:12:37.352415 | orchestrator | Friday 30 January 2026 06:12:10 +0000 (0:00:01.128) 0:24:04.513 ******** 2026-01-30 06:12:37.352420 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352426 | orchestrator | 2026-01-30 06:12:37.352445 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:12:37.352452 | orchestrator | Friday 30 January 2026 06:12:12 +0000 (0:00:01.122) 0:24:05.636 ******** 2026-01-30 06:12:37.352457 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352463 | orchestrator | 2026-01-30 06:12:37.352469 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:12:37.352475 | orchestrator | Friday 30 January 2026 06:12:13 +0000 (0:00:01.118) 0:24:06.754 ******** 2026-01-30 06:12:37.352481 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:12:37.352486 | orchestrator | 2026-01-30 06:12:37.352492 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:12:37.352498 | orchestrator | Friday 30 January 2026 06:12:16 +0000 (0:00:02.989) 0:24:09.744 ******** 2026-01-30 06:12:37.352504 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:12:37.352509 | orchestrator | 2026-01-30 06:12:37.352515 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:12:37.352521 | orchestrator | Friday 30 January 2026 06:12:17 +0000 (0:00:01.133) 0:24:10.878 ******** 2026-01-30 06:12:37.352527 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-01-30 06:12:37.352532 | orchestrator | 2026-01-30 06:12:37.352538 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:12:37.352544 | orchestrator | Friday 30 January 2026 06:12:18 +0000 (0:00:01.154) 0:24:12.033 ******** 2026-01-30 06:12:37.352550 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352555 | orchestrator | 2026-01-30 06:12:37.352561 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:12:37.352567 | orchestrator | Friday 30 January 2026 06:12:19 +0000 (0:00:01.142) 0:24:13.175 ******** 2026-01-30 06:12:37.352579 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352585 | orchestrator | 2026-01-30 06:12:37.352591 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:12:37.352597 | orchestrator | Friday 30 January 2026 06:12:20 +0000 (0:00:01.183) 0:24:14.359 ******** 2026-01-30 06:12:37.352602 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352608 | orchestrator | 2026-01-30 06:12:37.352614 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:12:37.352620 | orchestrator | Friday 30 January 2026 06:12:22 +0000 (0:00:01.287) 0:24:15.648 ******** 2026-01-30 06:12:37.352625 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352631 | orchestrator | 2026-01-30 06:12:37.352637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:12:37.352643 | orchestrator | Friday 30 January 2026 06:12:23 +0000 (0:00:01.154) 0:24:16.802 ******** 2026-01-30 06:12:37.352648 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352654 | orchestrator | 2026-01-30 06:12:37.352660 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
octopus] ******************* 2026-01-30 06:12:37.352666 | orchestrator | Friday 30 January 2026 06:12:24 +0000 (0:00:01.140) 0:24:17.942 ******** 2026-01-30 06:12:37.352672 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352677 | orchestrator | 2026-01-30 06:12:37.352683 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:12:37.352689 | orchestrator | Friday 30 January 2026 06:12:25 +0000 (0:00:01.095) 0:24:19.038 ******** 2026-01-30 06:12:37.352695 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352700 | orchestrator | 2026-01-30 06:12:37.352706 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:12:37.352712 | orchestrator | Friday 30 January 2026 06:12:26 +0000 (0:00:01.103) 0:24:20.141 ******** 2026-01-30 06:12:37.352718 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:12:37.352724 | orchestrator | 2026-01-30 06:12:37.352729 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:12:37.352735 | orchestrator | Friday 30 January 2026 06:12:27 +0000 (0:00:01.100) 0:24:21.241 ******** 2026-01-30 06:12:37.352744 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:12:37.352750 | orchestrator | 2026-01-30 06:12:37.352755 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:12:37.352761 | orchestrator | Friday 30 January 2026 06:12:28 +0000 (0:00:01.104) 0:24:22.346 ******** 2026-01-30 06:12:37.352767 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-01-30 06:12:37.352772 | orchestrator | 2026-01-30 06:12:37.352778 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:12:37.352784 | orchestrator | Friday 30 January 2026 06:12:29 +0000 (0:00:01.113) 0:24:23.460 ******** 2026-01-30 
06:12:37.352790 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-01-30 06:12:37.352796 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-01-30 06:12:37.352843 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-01-30 06:12:37.352850 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-01-30 06:12:37.352855 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-01-30 06:12:37.352861 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-01-30 06:12:37.352867 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-01-30 06:12:37.352872 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:12:37.352879 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:12:37.352884 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:12:37.352890 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:12:37.352896 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:12:37.352901 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:12:37.352912 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:12:37.352918 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-01-30 06:12:37.352924 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-01-30 06:12:37.352929 | orchestrator | 2026-01-30 06:12:37.352940 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:13:31.967725 | orchestrator | Friday 30 January 2026 06:12:37 +0000 (0:00:07.478) 0:24:30.938 ******** 2026-01-30 06:13:31.967879 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.967904 | orchestrator | 2026-01-30 06:13:31.967914 | orchestrator | TASK [ceph-config : 
Reset num_osds] ******************************************** 2026-01-30 06:13:31.967922 | orchestrator | Friday 30 January 2026 06:12:38 +0000 (0:00:01.140) 0:24:32.079 ******** 2026-01-30 06:13:31.967931 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.967939 | orchestrator | 2026-01-30 06:13:31.967948 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:13:31.967957 | orchestrator | Friday 30 January 2026 06:12:39 +0000 (0:00:01.114) 0:24:33.193 ******** 2026-01-30 06:13:31.967967 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.967976 | orchestrator | 2026-01-30 06:13:31.967983 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:13:31.967989 | orchestrator | Friday 30 January 2026 06:12:40 +0000 (0:00:01.132) 0:24:34.326 ******** 2026-01-30 06:13:31.967994 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.967999 | orchestrator | 2026-01-30 06:13:31.968004 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-30 06:13:31.968009 | orchestrator | Friday 30 January 2026 06:12:41 +0000 (0:00:01.129) 0:24:35.455 ******** 2026-01-30 06:13:31.968014 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968019 | orchestrator | 2026-01-30 06:13:31.968024 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:13:31.968029 | orchestrator | Friday 30 January 2026 06:12:42 +0000 (0:00:01.114) 0:24:36.570 ******** 2026-01-30 06:13:31.968037 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968044 | orchestrator | 2026-01-30 06:13:31.968051 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:13:31.968066 | orchestrator | Friday 30 January 2026 06:12:44 +0000 (0:00:01.107) 0:24:37.677 ******** 2026-01-30 
06:13:31.968074 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968081 | orchestrator | 2026-01-30 06:13:31.968089 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:13:31.968096 | orchestrator | Friday 30 January 2026 06:12:45 +0000 (0:00:01.126) 0:24:38.804 ******** 2026-01-30 06:13:31.968104 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968112 | orchestrator | 2026-01-30 06:13:31.968120 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:13:31.968127 | orchestrator | Friday 30 January 2026 06:12:46 +0000 (0:00:01.149) 0:24:39.953 ******** 2026-01-30 06:13:31.968136 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968144 | orchestrator | 2026-01-30 06:13:31.968152 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:13:31.968160 | orchestrator | Friday 30 January 2026 06:12:47 +0000 (0:00:01.137) 0:24:41.091 ******** 2026-01-30 06:13:31.968169 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968177 | orchestrator | 2026-01-30 06:13:31.968185 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:13:31.968190 | orchestrator | Friday 30 January 2026 06:12:48 +0000 (0:00:01.226) 0:24:42.317 ******** 2026-01-30 06:13:31.968195 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968201 | orchestrator | 2026-01-30 06:13:31.968206 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:13:31.968211 | orchestrator | Friday 30 January 2026 06:12:49 +0000 (0:00:01.116) 0:24:43.434 ******** 2026-01-30 06:13:31.968236 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968243 | orchestrator | 2026-01-30 06:13:31.968254 | 
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:13:31.968277 | orchestrator | Friday 30 January 2026 06:12:50 +0000 (0:00:01.137) 0:24:44.571 ******** 2026-01-30 06:13:31.968286 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968293 | orchestrator | 2026-01-30 06:13:31.968300 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:13:31.968309 | orchestrator | Friday 30 January 2026 06:12:52 +0000 (0:00:01.290) 0:24:45.861 ******** 2026-01-30 06:13:31.968317 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968324 | orchestrator | 2026-01-30 06:13:31.968331 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:13:31.968339 | orchestrator | Friday 30 January 2026 06:12:53 +0000 (0:00:01.126) 0:24:46.988 ******** 2026-01-30 06:13:31.968347 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968355 | orchestrator | 2026-01-30 06:13:31.968363 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:13:31.968371 | orchestrator | Friday 30 January 2026 06:12:54 +0000 (0:00:01.207) 0:24:48.195 ******** 2026-01-30 06:13:31.968379 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968387 | orchestrator | 2026-01-30 06:13:31.968396 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:13:31.968405 | orchestrator | Friday 30 January 2026 06:12:55 +0000 (0:00:01.165) 0:24:49.361 ******** 2026-01-30 06:13:31.968413 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968422 | orchestrator | 2026-01-30 06:13:31.968429 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:13:31.968436 | orchestrator | Friday 30 January 
2026 06:12:56 +0000 (0:00:01.105) 0:24:50.467 ******** 2026-01-30 06:13:31.968441 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968447 | orchestrator | 2026-01-30 06:13:31.968453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:13:31.968459 | orchestrator | Friday 30 January 2026 06:12:57 +0000 (0:00:01.121) 0:24:51.588 ******** 2026-01-30 06:13:31.968465 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968471 | orchestrator | 2026-01-30 06:13:31.968477 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:13:31.968482 | orchestrator | Friday 30 January 2026 06:12:59 +0000 (0:00:01.142) 0:24:52.731 ******** 2026-01-30 06:13:31.968488 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968494 | orchestrator | 2026-01-30 06:13:31.968515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:13:31.968521 | orchestrator | Friday 30 January 2026 06:13:00 +0000 (0:00:01.144) 0:24:53.875 ******** 2026-01-30 06:13:31.968526 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968532 | orchestrator | 2026-01-30 06:13:31.968537 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:13:31.968543 | orchestrator | Friday 30 January 2026 06:13:01 +0000 (0:00:01.131) 0:24:55.007 ******** 2026-01-30 06:13:31.968549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-30 06:13:31.968555 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-30 06:13:31.968560 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-30 06:13:31.968566 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968571 | orchestrator | 2026-01-30 06:13:31.968577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface 
- ipv4] ****** 2026-01-30 06:13:31.968582 | orchestrator | Friday 30 January 2026 06:13:03 +0000 (0:00:01.776) 0:24:56.783 ******** 2026-01-30 06:13:31.968588 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-30 06:13:31.968593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-30 06:13:31.968605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-30 06:13:31.968610 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968615 | orchestrator | 2026-01-30 06:13:31.968619 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:13:31.968624 | orchestrator | Friday 30 January 2026 06:13:04 +0000 (0:00:01.738) 0:24:58.522 ******** 2026-01-30 06:13:31.968629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-30 06:13:31.968634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-30 06:13:31.968638 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-30 06:13:31.968643 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968648 | orchestrator | 2026-01-30 06:13:31.968653 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:13:31.968657 | orchestrator | Friday 30 January 2026 06:13:06 +0000 (0:00:01.905) 0:25:00.427 ******** 2026-01-30 06:13:31.968662 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968667 | orchestrator | 2026-01-30 06:13:31.968671 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:13:31.968676 | orchestrator | Friday 30 January 2026 06:13:07 +0000 (0:00:01.135) 0:25:01.563 ******** 2026-01-30 06:13:31.968682 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-30 06:13:31.968686 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:13:31.968691 | orchestrator | 2026-01-30 
06:13:31.968696 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:13:31.968701 | orchestrator | Friday 30 January 2026 06:13:09 +0000 (0:00:01.283) 0:25:02.846 ******** 2026-01-30 06:13:31.968705 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:13:31.968710 | orchestrator | 2026-01-30 06:13:31.968715 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-30 06:13:31.968720 | orchestrator | Friday 30 January 2026 06:13:11 +0000 (0:00:01.838) 0:25:04.685 ******** 2026-01-30 06:13:31.968725 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:13:31.968730 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:13:31.968735 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:13:31.968740 | orchestrator | 2026-01-30 06:13:31.968745 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-30 06:13:31.968754 | orchestrator | Friday 30 January 2026 06:13:12 +0000 (0:00:01.667) 0:25:06.353 ******** 2026-01-30 06:13:31.968759 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0 2026-01-30 06:13:31.968767 | orchestrator | 2026-01-30 06:13:31.968775 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-30 06:13:31.968782 | orchestrator | Friday 30 January 2026 06:13:14 +0000 (0:00:01.460) 0:25:07.813 ******** 2026-01-30 06:13:31.968869 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:13:31.968878 | orchestrator | 2026-01-30 06:13:31.968885 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-30 06:13:31.968893 | orchestrator | Friday 30 January 2026 06:13:15 +0000 (0:00:01.598) 0:25:09.412 ******** 2026-01-30 06:13:31.968900 | orchestrator | 
skipping: [testbed-node-0] 2026-01-30 06:13:31.968907 | orchestrator | 2026-01-30 06:13:31.968915 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-30 06:13:31.968923 | orchestrator | Friday 30 January 2026 06:13:16 +0000 (0:00:01.141) 0:25:10.554 ******** 2026-01-30 06:13:31.968931 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-30 06:13:31.968939 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-30 06:13:31.968946 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-30 06:13:31.968954 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-30 06:13:31.968962 | orchestrator | 2026-01-30 06:13:31.968969 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-30 06:13:31.968977 | orchestrator | Friday 30 January 2026 06:13:25 +0000 (0:00:08.108) 0:25:18.662 ******** 2026-01-30 06:13:31.968993 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:13:31.969001 | orchestrator | 2026-01-30 06:13:31.969008 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-30 06:13:31.969015 | orchestrator | Friday 30 January 2026 06:13:26 +0000 (0:00:01.208) 0:25:19.871 ******** 2026-01-30 06:13:31.969022 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-30 06:13:31.969029 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-30 06:13:31.969037 | orchestrator | 2026-01-30 06:13:31.969044 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-30 06:13:31.969051 | orchestrator | Friday 30 January 2026 06:13:29 +0000 (0:00:03.611) 0:25:23.482 ******** 2026-01-30 06:13:31.969067 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-30 06:14:29.833960 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-30 06:14:29.834122 | orchestrator | 2026-01-30 
06:14:29.834137 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-30 06:14:29.834146 | orchestrator | Friday 30 January 2026 06:13:31 +0000 (0:00:02.083) 0:25:25.566 ******** 2026-01-30 06:14:29.834154 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:14:29.834161 | orchestrator | 2026-01-30 06:14:29.834168 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-30 06:14:29.834175 | orchestrator | Friday 30 January 2026 06:13:33 +0000 (0:00:01.578) 0:25:27.145 ******** 2026-01-30 06:14:29.834182 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:14:29.834189 | orchestrator | 2026-01-30 06:14:29.834196 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-30 06:14:29.834203 | orchestrator | Friday 30 January 2026 06:13:34 +0000 (0:00:01.115) 0:25:28.260 ******** 2026-01-30 06:14:29.834209 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:14:29.834219 | orchestrator | 2026-01-30 06:14:29.834226 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-30 06:14:29.834238 | orchestrator | Friday 30 January 2026 06:13:35 +0000 (0:00:01.136) 0:25:29.397 ******** 2026-01-30 06:14:29.834250 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-01-30 06:14:29.834262 | orchestrator | 2026-01-30 06:14:29.834274 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-30 06:14:29.834286 | orchestrator | Friday 30 January 2026 06:13:37 +0000 (0:00:01.459) 0:25:30.857 ******** 2026-01-30 06:14:29.834298 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:14:29.834309 | orchestrator | 2026-01-30 06:14:29.834321 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-30 06:14:29.834331 | orchestrator | Friday 30 
January 2026 06:13:38 +0000 (0:00:01.210) 0:25:32.068 ******** 2026-01-30 06:14:29.834341 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:14:29.834353 | orchestrator | 2026-01-30 06:14:29.834364 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-30 06:14:29.834376 | orchestrator | Friday 30 January 2026 06:13:39 +0000 (0:00:01.130) 0:25:33.198 ******** 2026-01-30 06:14:29.834387 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-01-30 06:14:29.834398 | orchestrator | 2026-01-30 06:14:29.834409 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-30 06:14:29.834420 | orchestrator | Friday 30 January 2026 06:13:41 +0000 (0:00:01.500) 0:25:34.699 ******** 2026-01-30 06:14:29.834431 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:14:29.834442 | orchestrator | 2026-01-30 06:14:29.834453 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-30 06:14:29.834465 | orchestrator | Friday 30 January 2026 06:13:43 +0000 (0:00:02.129) 0:25:36.829 ******** 2026-01-30 06:14:29.834476 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:14:29.834489 | orchestrator | 2026-01-30 06:14:29.834502 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-30 06:14:29.834514 | orchestrator | Friday 30 January 2026 06:13:45 +0000 (0:00:02.036) 0:25:38.866 ******** 2026-01-30 06:14:29.834553 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:14:29.834565 | orchestrator | 2026-01-30 06:14:29.834578 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-30 06:14:29.834592 | orchestrator | Friday 30 January 2026 06:13:47 +0000 (0:00:02.625) 0:25:41.491 ******** 2026-01-30 06:14:29.834604 | orchestrator | changed: [testbed-node-0] 2026-01-30 06:14:29.834617 | orchestrator | 2026-01-30 
06:14:29.834627 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-30 06:14:29.834640 | orchestrator | Friday 30 January 2026 06:13:52 +0000 (0:00:04.295) 0:25:45.787 ******** 2026-01-30 06:14:29.834654 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:14:29.834666 | orchestrator | 2026-01-30 06:14:29.834678 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-01-30 06:14:29.834690 | orchestrator | 2026-01-30 06:14:29.834751 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-01-30 06:14:29.834764 | orchestrator | Friday 30 January 2026 06:13:53 +0000 (0:00:01.010) 0:25:46.797 ******** 2026-01-30 06:14:29.834804 | orchestrator | changed: [testbed-node-1] 2026-01-30 06:14:29.834816 | orchestrator | 2026-01-30 06:14:29.834827 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-01-30 06:14:29.834837 | orchestrator | Friday 30 January 2026 06:14:05 +0000 (0:00:12.751) 0:25:59.549 ******** 2026-01-30 06:14:29.834848 | orchestrator | changed: [testbed-node-1] 2026-01-30 06:14:29.834858 | orchestrator | 2026-01-30 06:14:29.834867 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:14:29.834877 | orchestrator | Friday 30 January 2026 06:14:08 +0000 (0:00:02.353) 0:26:01.902 ******** 2026-01-30 06:14:29.834887 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-01-30 06:14:29.834896 | orchestrator | 2026-01-30 06:14:29.834906 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 06:14:29.834917 | orchestrator | Friday 30 January 2026 06:14:09 +0000 (0:00:01.100) 0:26:03.003 ******** 2026-01-30 06:14:29.834927 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.834937 | orchestrator | 2026-01-30 
06:14:29.834947 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 06:14:29.834957 | orchestrator | Friday 30 January 2026 06:14:10 +0000 (0:00:01.506) 0:26:04.509 ******** 2026-01-30 06:14:29.834968 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.834978 | orchestrator | 2026-01-30 06:14:29.834988 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:14:29.834999 | orchestrator | Friday 30 January 2026 06:14:12 +0000 (0:00:01.151) 0:26:05.661 ******** 2026-01-30 06:14:29.835008 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.835019 | orchestrator | 2026-01-30 06:14:29.835029 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:14:29.835038 | orchestrator | Friday 30 January 2026 06:14:13 +0000 (0:00:01.628) 0:26:07.290 ******** 2026-01-30 06:14:29.835049 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.835059 | orchestrator | 2026-01-30 06:14:29.835092 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 06:14:29.835104 | orchestrator | Friday 30 January 2026 06:14:14 +0000 (0:00:01.110) 0:26:08.401 ******** 2026-01-30 06:14:29.835114 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.835124 | orchestrator | 2026-01-30 06:14:29.835134 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 06:14:29.835144 | orchestrator | Friday 30 January 2026 06:14:15 +0000 (0:00:01.150) 0:26:09.552 ******** 2026-01-30 06:14:29.835154 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.835164 | orchestrator | 2026-01-30 06:14:29.835175 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 06:14:29.835188 | orchestrator | Friday 30 January 2026 06:14:17 +0000 (0:00:01.141) 0:26:10.694 ******** 
2026-01-30 06:14:29.835198 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:29.835209 | orchestrator | 2026-01-30 06:14:29.835233 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 06:14:29.835244 | orchestrator | Friday 30 January 2026 06:14:18 +0000 (0:00:01.179) 0:26:11.873 ******** 2026-01-30 06:14:29.835255 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.835265 | orchestrator | 2026-01-30 06:14:29.835275 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 06:14:29.835286 | orchestrator | Friday 30 January 2026 06:14:19 +0000 (0:00:01.108) 0:26:12.982 ******** 2026-01-30 06:14:29.835297 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:14:29.835307 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 06:14:29.835319 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:14:29.835330 | orchestrator | 2026-01-30 06:14:29.835340 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 06:14:29.835350 | orchestrator | Friday 30 January 2026 06:14:21 +0000 (0:00:01.659) 0:26:14.641 ******** 2026-01-30 06:14:29.835361 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:29.835373 | orchestrator | 2026-01-30 06:14:29.835384 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 06:14:29.835394 | orchestrator | Friday 30 January 2026 06:14:22 +0000 (0:00:01.250) 0:26:15.891 ******** 2026-01-30 06:14:29.835406 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:14:29.835417 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 06:14:29.835428 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-30 06:14:29.835438 | orchestrator | 2026-01-30 06:14:29.835450 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 06:14:29.835461 | orchestrator | Friday 30 January 2026 06:14:25 +0000 (0:00:02.985) 0:26:18.877 ******** 2026-01-30 06:14:29.835472 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-30 06:14:29.835483 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-30 06:14:29.835492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-30 06:14:29.835503 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:29.835514 | orchestrator | 2026-01-30 06:14:29.835525 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 06:14:29.835536 | orchestrator | Friday 30 January 2026 06:14:26 +0000 (0:00:01.410) 0:26:20.288 ******** 2026-01-30 06:14:29.835549 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 06:14:29.835572 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 06:14:29.835584 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 06:14:29.835596 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:29.835607 | orchestrator | 2026-01-30 06:14:29.835619 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-01-30 06:14:29.835630 | orchestrator | Friday 30 January 2026 06:14:28 +0000 (0:00:01.983) 0:26:22.271 ******** 2026-01-30 06:14:29.835643 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:29.835667 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:29.835694 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:49.035903 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036000 | orchestrator | 2026-01-30 06:14:49.036010 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 06:14:49.036018 | orchestrator | Friday 30 January 2026 06:14:29 +0000 (0:00:01.161) 0:26:23.433 ******** 2026-01-30 06:14:49.036028 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:14:22.861269', 'end': '2026-01-30 06:14:22.920134', 'delta': '0:00:00.058865', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 06:14:49.036039 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:14:23.474786', 'end': '2026-01-30 06:14:23.541780', 'delta': '0:00:00.066994', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 06:14:49.036059 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:14:24.068725', 'end': '2026-01-30 06:14:24.106659', 'delta': '0:00:00.037934', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 06:14:49.036066 | orchestrator | 2026-01-30 06:14:49.036073 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 06:14:49.036080 | orchestrator | Friday 30 January 2026 06:14:31 +0000 (0:00:01.199) 0:26:24.632 ******** 2026-01-30 06:14:49.036086 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:49.036093 | orchestrator | 2026-01-30 06:14:49.036100 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 06:14:49.036122 | orchestrator | Friday 30 January 2026 06:14:32 +0000 (0:00:01.228) 0:26:25.861 ******** 2026-01-30 06:14:49.036128 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036135 | orchestrator | 2026-01-30 06:14:49.036141 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 06:14:49.036147 | orchestrator | Friday 30 January 2026 06:14:33 +0000 (0:00:01.236) 0:26:27.097 ******** 2026-01-30 06:14:49.036153 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:49.036159 | orchestrator | 2026-01-30 06:14:49.036165 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 06:14:49.036171 | orchestrator | Friday 30 January 2026 06:14:34 +0000 (0:00:01.138) 0:26:28.236 ******** 2026-01-30 06:14:49.036177 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:14:49.036184 | orchestrator | 2026-01-30 06:14:49.036190 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:14:49.036200 | orchestrator | Friday 30 January 2026 06:14:36 +0000 (0:00:01.961) 0:26:30.198 ******** 2026-01-30 
06:14:49.036210 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:14:49.036220 | orchestrator | 2026-01-30 06:14:49.036231 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 06:14:49.036240 | orchestrator | Friday 30 January 2026 06:14:37 +0000 (0:00:01.152) 0:26:31.350 ******** 2026-01-30 06:14:49.036249 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036260 | orchestrator | 2026-01-30 06:14:49.036269 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 06:14:49.036278 | orchestrator | Friday 30 January 2026 06:14:38 +0000 (0:00:01.132) 0:26:32.482 ******** 2026-01-30 06:14:49.036288 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036298 | orchestrator | 2026-01-30 06:14:49.036307 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:14:49.036315 | orchestrator | Friday 30 January 2026 06:14:40 +0000 (0:00:01.202) 0:26:33.685 ******** 2026-01-30 06:14:49.036324 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036334 | orchestrator | 2026-01-30 06:14:49.036358 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 06:14:49.036369 | orchestrator | Friday 30 January 2026 06:14:41 +0000 (0:00:01.107) 0:26:34.793 ******** 2026-01-30 06:14:49.036378 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036388 | orchestrator | 2026-01-30 06:14:49.036398 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 06:14:49.036407 | orchestrator | Friday 30 January 2026 06:14:42 +0000 (0:00:01.104) 0:26:35.897 ******** 2026-01-30 06:14:49.036418 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036428 | orchestrator | 2026-01-30 06:14:49.036438 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-01-30 06:14:49.036449 | orchestrator | Friday 30 January 2026 06:14:43 +0000 (0:00:01.108) 0:26:37.006 ******** 2026-01-30 06:14:49.036460 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036472 | orchestrator | 2026-01-30 06:14:49.036482 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-30 06:14:49.036491 | orchestrator | Friday 30 January 2026 06:14:44 +0000 (0:00:01.114) 0:26:38.120 ******** 2026-01-30 06:14:49.036501 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036510 | orchestrator | 2026-01-30 06:14:49.036520 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 06:14:49.036530 | orchestrator | Friday 30 January 2026 06:14:45 +0000 (0:00:01.111) 0:26:39.232 ******** 2026-01-30 06:14:49.036540 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036549 | orchestrator | 2026-01-30 06:14:49.036558 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 06:14:49.036569 | orchestrator | Friday 30 January 2026 06:14:46 +0000 (0:00:01.085) 0:26:40.317 ******** 2026-01-30 06:14:49.036580 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:49.036589 | orchestrator | 2026-01-30 06:14:49.036598 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 06:14:49.036622 | orchestrator | Friday 30 January 2026 06:14:47 +0000 (0:00:01.092) 0:26:41.410 ******** 2026-01-30 06:14:49.036634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-01-30 06:14:49.036654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:14:49.036665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:14:49.036677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 06:14:49.036689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-01-30 06:14:49.036699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:14:49.036718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:14:50.206396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '668a7bb6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 
'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 06:14:50.206526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:14:50.206545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:14:50.206558 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:14:50.206571 | orchestrator | 2026-01-30 06:14:50.206583 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:14:50.206594 | orchestrator | Friday 30 January 2026 06:14:49 +0000 (0:00:01.219) 0:26:42.629 ******** 2026-01-30 06:14:50.206608 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:50.206640 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:50.206661 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:50.206674 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:50.206693 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:50.206705 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:50.206716 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:14:50.206745 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '668a7bb6', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1', 'scsi-SQEMU_QEMU_HARDDISK_668a7bb6-1d9a-43cc-b5c1-9e85d024a763-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:15:24.582139 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:15:24.582274 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:15:24.582290 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.582302 | orchestrator | 2026-01-30 06:15:24.582313 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:15:24.582324 | 
orchestrator | Friday 30 January 2026 06:14:50 +0000 (0:00:01.177) 0:26:43.807 ******** 2026-01-30 06:15:24.582334 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.582345 | orchestrator | 2026-01-30 06:15:24.582355 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:15:24.582365 | orchestrator | Friday 30 January 2026 06:14:51 +0000 (0:00:01.523) 0:26:45.331 ******** 2026-01-30 06:15:24.582374 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.582384 | orchestrator | 2026-01-30 06:15:24.582393 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:15:24.582403 | orchestrator | Friday 30 January 2026 06:14:52 +0000 (0:00:01.109) 0:26:46.441 ******** 2026-01-30 06:15:24.582413 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.582422 | orchestrator | 2026-01-30 06:15:24.582432 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:15:24.582544 | orchestrator | Friday 30 January 2026 06:14:54 +0000 (0:00:01.575) 0:26:48.017 ******** 2026-01-30 06:15:24.582557 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.582567 | orchestrator | 2026-01-30 06:15:24.582577 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:15:24.582587 | orchestrator | Friday 30 January 2026 06:14:55 +0000 (0:00:01.155) 0:26:49.172 ******** 2026-01-30 06:15:24.582596 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.582605 | orchestrator | 2026-01-30 06:15:24.582615 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:15:24.582625 | orchestrator | Friday 30 January 2026 06:14:56 +0000 (0:00:01.196) 0:26:50.369 ******** 2026-01-30 06:15:24.582634 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.582644 | orchestrator | 2026-01-30 06:15:24.582653 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:15:24.582663 | orchestrator | Friday 30 January 2026 06:14:57 +0000 (0:00:01.124) 0:26:51.493 ******** 2026-01-30 06:15:24.582672 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-01-30 06:15:24.582682 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 06:15:24.582691 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-01-30 06:15:24.582701 | orchestrator | 2026-01-30 06:15:24.582710 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:15:24.582720 | orchestrator | Friday 30 January 2026 06:14:59 +0000 (0:00:01.723) 0:26:53.216 ******** 2026-01-30 06:15:24.582729 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-30 06:15:24.582740 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-30 06:15:24.582750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-30 06:15:24.582780 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.582790 | orchestrator | 2026-01-30 06:15:24.582800 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:15:24.582809 | orchestrator | Friday 30 January 2026 06:15:00 +0000 (0:00:01.184) 0:26:54.400 ******** 2026-01-30 06:15:24.582819 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.582829 | orchestrator | 2026-01-30 06:15:24.582838 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:15:24.582848 | orchestrator | Friday 30 January 2026 06:15:01 +0000 (0:00:01.124) 0:26:55.525 ******** 2026-01-30 06:15:24.582857 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:15:24.582868 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 
06:15:24.582877 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:15:24.582887 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:15:24.582897 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:15:24.582921 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:15:24.582948 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:15:24.582959 | orchestrator | 2026-01-30 06:15:24.582968 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:15:24.582978 | orchestrator | Friday 30 January 2026 06:15:04 +0000 (0:00:02.156) 0:26:57.682 ******** 2026-01-30 06:15:24.582988 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:15:24.582997 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-01-30 06:15:24.583007 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:15:24.583016 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:15:24.583026 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:15:24.583044 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:15:24.583054 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:15:24.583063 | orchestrator | 2026-01-30 06:15:24.583073 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 06:15:24.583082 | orchestrator | Friday 30 January 2026 06:15:06 +0000 (0:00:02.337) 0:27:00.020 
******** 2026-01-30 06:15:24.583091 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1 2026-01-30 06:15:24.583102 | orchestrator | 2026-01-30 06:15:24.583111 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 06:15:24.583121 | orchestrator | Friday 30 January 2026 06:15:07 +0000 (0:00:01.218) 0:27:01.238 ******** 2026-01-30 06:15:24.583131 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1 2026-01-30 06:15:24.583140 | orchestrator | 2026-01-30 06:15:24.583150 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 06:15:24.583159 | orchestrator | Friday 30 January 2026 06:15:08 +0000 (0:00:01.130) 0:27:02.369 ******** 2026-01-30 06:15:24.583168 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.583178 | orchestrator | 2026-01-30 06:15:24.583188 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 06:15:24.583197 | orchestrator | Friday 30 January 2026 06:15:10 +0000 (0:00:01.603) 0:27:03.973 ******** 2026-01-30 06:15:24.583207 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.583216 | orchestrator | 2026-01-30 06:15:24.583225 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 06:15:24.583235 | orchestrator | Friday 30 January 2026 06:15:11 +0000 (0:00:01.110) 0:27:05.084 ******** 2026-01-30 06:15:24.583254 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.583264 | orchestrator | 2026-01-30 06:15:24.583273 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 06:15:24.583283 | orchestrator | Friday 30 January 2026 06:15:12 +0000 (0:00:01.123) 0:27:06.207 ******** 2026-01-30 06:15:24.583292 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
06:15:24.583302 | orchestrator | 2026-01-30 06:15:24.583311 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:15:24.583321 | orchestrator | Friday 30 January 2026 06:15:13 +0000 (0:00:01.125) 0:27:07.333 ******** 2026-01-30 06:15:24.583330 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.583340 | orchestrator | 2026-01-30 06:15:24.583349 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:15:24.583359 | orchestrator | Friday 30 January 2026 06:15:15 +0000 (0:00:01.656) 0:27:08.989 ******** 2026-01-30 06:15:24.583368 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.583378 | orchestrator | 2026-01-30 06:15:24.583387 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:15:24.583396 | orchestrator | Friday 30 January 2026 06:15:16 +0000 (0:00:01.116) 0:27:10.105 ******** 2026-01-30 06:15:24.583406 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.583415 | orchestrator | 2026-01-30 06:15:24.583425 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:15:24.583434 | orchestrator | Friday 30 January 2026 06:15:17 +0000 (0:00:01.109) 0:27:11.214 ******** 2026-01-30 06:15:24.583444 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.583453 | orchestrator | 2026-01-30 06:15:24.583463 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:15:24.583472 | orchestrator | Friday 30 January 2026 06:15:19 +0000 (0:00:01.588) 0:27:12.803 ******** 2026-01-30 06:15:24.583482 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.583491 | orchestrator | 2026-01-30 06:15:24.583500 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:15:24.583510 | orchestrator | Friday 30 January 2026 
06:15:20 +0000 (0:00:01.537) 0:27:14.341 ******** 2026-01-30 06:15:24.583519 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.583539 | orchestrator | 2026-01-30 06:15:24.583549 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:15:24.583558 | orchestrator | Friday 30 January 2026 06:15:21 +0000 (0:00:00.785) 0:27:15.126 ******** 2026-01-30 06:15:24.583568 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:15:24.583577 | orchestrator | 2026-01-30 06:15:24.583586 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:15:24.583596 | orchestrator | Friday 30 January 2026 06:15:22 +0000 (0:00:00.784) 0:27:15.910 ******** 2026-01-30 06:15:24.583605 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.583615 | orchestrator | 2026-01-30 06:15:24.583625 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:15:24.583634 | orchestrator | Friday 30 January 2026 06:15:23 +0000 (0:00:00.770) 0:27:16.680 ******** 2026-01-30 06:15:24.583644 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:15:24.583653 | orchestrator | 2026-01-30 06:15:24.583663 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:15:24.583677 | orchestrator | Friday 30 January 2026 06:15:23 +0000 (0:00:00.742) 0:27:17.423 ******** 2026-01-30 06:15:24.583693 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.509384 | orchestrator | 2026-01-30 06:16:03.509502 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:16:03.509518 | orchestrator | Friday 30 January 2026 06:15:24 +0000 (0:00:00.756) 0:27:18.180 ******** 2026-01-30 06:16:03.509529 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.509540 | orchestrator | 2026-01-30 06:16:03.509550 | orchestrator | TASK 
[ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:16:03.509560 | orchestrator | Friday 30 January 2026 06:15:25 +0000 (0:00:00.749) 0:27:18.930 ******** 2026-01-30 06:16:03.509570 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.509580 | orchestrator | 2026-01-30 06:16:03.509590 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:16:03.509600 | orchestrator | Friday 30 January 2026 06:15:26 +0000 (0:00:00.728) 0:27:19.659 ******** 2026-01-30 06:16:03.509610 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.509620 | orchestrator | 2026-01-30 06:16:03.509630 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:16:03.509640 | orchestrator | Friday 30 January 2026 06:15:26 +0000 (0:00:00.679) 0:27:20.338 ******** 2026-01-30 06:16:03.509650 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.509659 | orchestrator | 2026-01-30 06:16:03.509669 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:16:03.509679 | orchestrator | Friday 30 January 2026 06:15:27 +0000 (0:00:00.640) 0:27:20.979 ******** 2026-01-30 06:16:03.509688 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.509698 | orchestrator | 2026-01-30 06:16:03.509708 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:16:03.509717 | orchestrator | Friday 30 January 2026 06:15:28 +0000 (0:00:00.747) 0:27:21.726 ******** 2026-01-30 06:16:03.509727 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.509737 | orchestrator | 2026-01-30 06:16:03.509842 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 06:16:03.509860 | orchestrator | Friday 30 January 2026 06:15:28 +0000 (0:00:00.737) 0:27:22.464 ******** 2026-01-30 06:16:03.509871 | 
orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.509881 | orchestrator | 2026-01-30 06:16:03.509891 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:16:03.509901 | orchestrator | Friday 30 January 2026 06:15:29 +0000 (0:00:00.786) 0:27:23.251 ******** 2026-01-30 06:16:03.509910 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.509922 | orchestrator | 2026-01-30 06:16:03.509933 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:16:03.509945 | orchestrator | Friday 30 January 2026 06:15:30 +0000 (0:00:00.732) 0:27:23.983 ******** 2026-01-30 06:16:03.509956 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.509990 | orchestrator | 2026-01-30 06:16:03.510001 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:16:03.510013 | orchestrator | Friday 30 January 2026 06:15:31 +0000 (0:00:00.750) 0:27:24.734 ******** 2026-01-30 06:16:03.510092 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510104 | orchestrator | 2026-01-30 06:16:03.510116 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:16:03.510127 | orchestrator | Friday 30 January 2026 06:15:31 +0000 (0:00:00.791) 0:27:25.526 ******** 2026-01-30 06:16:03.510138 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510149 | orchestrator | 2026-01-30 06:16:03.510161 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:16:03.510173 | orchestrator | Friday 30 January 2026 06:15:32 +0000 (0:00:00.757) 0:27:26.284 ******** 2026-01-30 06:16:03.510184 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510195 | orchestrator | 2026-01-30 06:16:03.510206 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with 
ceph_stable_release] *** 2026-01-30 06:16:03.510218 | orchestrator | Friday 30 January 2026 06:15:33 +0000 (0:00:00.756) 0:27:27.040 ******** 2026-01-30 06:16:03.510229 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510244 | orchestrator | 2026-01-30 06:16:03.510262 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:16:03.510275 | orchestrator | Friday 30 January 2026 06:15:34 +0000 (0:00:00.729) 0:27:27.770 ******** 2026-01-30 06:16:03.510285 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510294 | orchestrator | 2026-01-30 06:16:03.510304 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:16:03.510313 | orchestrator | Friday 30 January 2026 06:15:34 +0000 (0:00:00.656) 0:27:28.427 ******** 2026-01-30 06:16:03.510323 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510332 | orchestrator | 2026-01-30 06:16:03.510342 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:16:03.510351 | orchestrator | Friday 30 January 2026 06:15:35 +0000 (0:00:00.629) 0:27:29.056 ******** 2026-01-30 06:16:03.510360 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510370 | orchestrator | 2026-01-30 06:16:03.510379 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-01-30 06:16:03.510389 | orchestrator | Friday 30 January 2026 06:15:36 +0000 (0:00:00.734) 0:27:29.791 ******** 2026-01-30 06:16:03.510398 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510407 | orchestrator | 2026-01-30 06:16:03.510417 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:16:03.510426 | orchestrator | Friday 30 January 2026 06:15:36 +0000 (0:00:00.755) 0:27:30.546 ******** 2026-01-30 06:16:03.510436 | orchestrator | ok: [testbed-node-1] 
2026-01-30 06:16:03.510445 | orchestrator | 2026-01-30 06:16:03.510454 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:16:03.510464 | orchestrator | Friday 30 January 2026 06:15:38 +0000 (0:00:01.692) 0:27:32.239 ******** 2026-01-30 06:16:03.510473 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.510483 | orchestrator | 2026-01-30 06:16:03.510493 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:16:03.510502 | orchestrator | Friday 30 January 2026 06:15:40 +0000 (0:00:02.198) 0:27:34.438 ******** 2026-01-30 06:16:03.510525 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1 2026-01-30 06:16:03.510537 | orchestrator | 2026-01-30 06:16:03.510564 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:16:03.510575 | orchestrator | Friday 30 January 2026 06:15:41 +0000 (0:00:01.086) 0:27:35.524 ******** 2026-01-30 06:16:03.510585 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510594 | orchestrator | 2026-01-30 06:16:03.510604 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:16:03.510613 | orchestrator | Friday 30 January 2026 06:15:43 +0000 (0:00:01.092) 0:27:36.617 ******** 2026-01-30 06:16:03.510633 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510643 | orchestrator | 2026-01-30 06:16:03.510652 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-30 06:16:03.510662 | orchestrator | Friday 30 January 2026 06:15:44 +0000 (0:00:01.081) 0:27:37.699 ******** 2026-01-30 06:16:03.510671 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:16:03.510681 | orchestrator | ok: [testbed-node-1] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:16:03.510690 | orchestrator | 2026-01-30 06:16:03.510699 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:16:03.510709 | orchestrator | Friday 30 January 2026 06:15:46 +0000 (0:00:01.931) 0:27:39.630 ******** 2026-01-30 06:16:03.510718 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.510728 | orchestrator | 2026-01-30 06:16:03.510737 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:16:03.510842 | orchestrator | Friday 30 January 2026 06:15:47 +0000 (0:00:01.461) 0:27:41.092 ******** 2026-01-30 06:16:03.510853 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510863 | orchestrator | 2026-01-30 06:16:03.510872 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:16:03.510882 | orchestrator | Friday 30 January 2026 06:15:48 +0000 (0:00:01.081) 0:27:42.174 ******** 2026-01-30 06:16:03.510891 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510901 | orchestrator | 2026-01-30 06:16:03.510910 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:16:03.510920 | orchestrator | Friday 30 January 2026 06:15:49 +0000 (0:00:00.773) 0:27:42.947 ******** 2026-01-30 06:16:03.510929 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.510939 | orchestrator | 2026-01-30 06:16:03.510948 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:16:03.510958 | orchestrator | Friday 30 January 2026 06:15:50 +0000 (0:00:00.752) 0:27:43.700 ******** 2026-01-30 06:16:03.510967 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1 2026-01-30 06:16:03.510977 | orchestrator | 2026-01-30 06:16:03.510986 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:16:03.510996 | orchestrator | Friday 30 January 2026 06:15:51 +0000 (0:00:01.093) 0:27:44.793 ******** 2026-01-30 06:16:03.511005 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.511015 | orchestrator | 2026-01-30 06:16:03.511024 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:16:03.511034 | orchestrator | Friday 30 January 2026 06:15:52 +0000 (0:00:01.602) 0:27:46.396 ******** 2026-01-30 06:16:03.511044 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:16:03.511053 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:16:03.511063 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:16:03.511072 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.511082 | orchestrator | 2026-01-30 06:16:03.511092 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:16:03.511101 | orchestrator | Friday 30 January 2026 06:15:53 +0000 (0:00:01.106) 0:27:47.502 ******** 2026-01-30 06:16:03.511110 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.511120 | orchestrator | 2026-01-30 06:16:03.511129 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:16:03.511139 | orchestrator | Friday 30 January 2026 06:15:54 +0000 (0:00:01.091) 0:27:48.594 ******** 2026-01-30 06:16:03.511150 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.511166 | orchestrator | 2026-01-30 06:16:03.511182 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:16:03.511198 | orchestrator | Friday 30 January 2026 06:15:56 +0000 (0:00:01.216) 0:27:49.810 ******** 2026-01-30 06:16:03.511223 
| orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.511233 | orchestrator | 2026-01-30 06:16:03.511243 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:16:03.511253 | orchestrator | Friday 30 January 2026 06:15:57 +0000 (0:00:01.131) 0:27:50.942 ******** 2026-01-30 06:16:03.511264 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.511280 | orchestrator | 2026-01-30 06:16:03.511297 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:16:03.511311 | orchestrator | Friday 30 January 2026 06:15:58 +0000 (0:00:01.132) 0:27:52.074 ******** 2026-01-30 06:16:03.511321 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:03.511331 | orchestrator | 2026-01-30 06:16:03.511340 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:16:03.511350 | orchestrator | Friday 30 January 2026 06:15:59 +0000 (0:00:00.784) 0:27:52.859 ******** 2026-01-30 06:16:03.511359 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.511369 | orchestrator | 2026-01-30 06:16:03.511378 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:16:03.511388 | orchestrator | Friday 30 January 2026 06:16:01 +0000 (0:00:02.346) 0:27:55.206 ******** 2026-01-30 06:16:03.511397 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:03.511407 | orchestrator | 2026-01-30 06:16:03.511416 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:16:03.511426 | orchestrator | Friday 30 January 2026 06:16:02 +0000 (0:00:00.779) 0:27:55.986 ******** 2026-01-30 06:16:03.511443 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1 2026-01-30 06:16:03.511453 | orchestrator | 2026-01-30 06:16:03.511472 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-01-30 06:16:40.532429 | orchestrator | Friday 30 January 2026 06:16:03 +0000 (0:00:01.119) 0:27:57.106 ******** 2026-01-30 06:16:40.532549 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.532564 | orchestrator | 2026-01-30 06:16:40.532578 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:16:40.532590 | orchestrator | Friday 30 January 2026 06:16:04 +0000 (0:00:01.160) 0:27:58.267 ******** 2026-01-30 06:16:40.532601 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.532612 | orchestrator | 2026-01-30 06:16:40.532623 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:16:40.532634 | orchestrator | Friday 30 January 2026 06:16:05 +0000 (0:00:01.155) 0:27:59.422 ******** 2026-01-30 06:16:40.532645 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.532656 | orchestrator | 2026-01-30 06:16:40.532666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:16:40.532677 | orchestrator | Friday 30 January 2026 06:16:06 +0000 (0:00:01.169) 0:28:00.592 ******** 2026-01-30 06:16:40.532688 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.532698 | orchestrator | 2026-01-30 06:16:40.532709 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:16:40.532720 | orchestrator | Friday 30 January 2026 06:16:08 +0000 (0:00:01.201) 0:28:01.794 ******** 2026-01-30 06:16:40.532731 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.532798 | orchestrator | 2026-01-30 06:16:40.532809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:16:40.532820 | orchestrator | Friday 30 January 2026 06:16:09 +0000 (0:00:01.129) 0:28:02.923 ******** 2026-01-30 06:16:40.532831 | orchestrator | 
skipping: [testbed-node-1] 2026-01-30 06:16:40.532842 | orchestrator | 2026-01-30 06:16:40.532852 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:16:40.532863 | orchestrator | Friday 30 January 2026 06:16:10 +0000 (0:00:01.167) 0:28:04.091 ******** 2026-01-30 06:16:40.532874 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.532885 | orchestrator | 2026-01-30 06:16:40.532895 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:16:40.532906 | orchestrator | Friday 30 January 2026 06:16:11 +0000 (0:00:01.122) 0:28:05.213 ******** 2026-01-30 06:16:40.532942 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.532953 | orchestrator | 2026-01-30 06:16:40.532964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:16:40.532975 | orchestrator | Friday 30 January 2026 06:16:12 +0000 (0:00:01.164) 0:28:06.378 ******** 2026-01-30 06:16:40.532986 | orchestrator | ok: [testbed-node-1] 2026-01-30 06:16:40.532998 | orchestrator | 2026-01-30 06:16:40.533008 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:16:40.533019 | orchestrator | Friday 30 January 2026 06:16:13 +0000 (0:00:00.811) 0:28:07.189 ******** 2026-01-30 06:16:40.533030 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1 2026-01-30 06:16:40.533042 | orchestrator | 2026-01-30 06:16:40.533053 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:16:40.533063 | orchestrator | Friday 30 January 2026 06:16:14 +0000 (0:00:01.113) 0:28:08.303 ******** 2026-01-30 06:16:40.533074 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph) 2026-01-30 06:16:40.533085 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/) 2026-01-30 
06:16:40.533096 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-01-30 06:16:40.533107 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-01-30 06:16:40.533117 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-01-30 06:16:40.533128 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-01-30 06:16:40.533139 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-01-30 06:16:40.533149 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:16:40.533160 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:16:40.533171 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:16:40.533182 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:16:40.533192 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:16:40.533203 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:16:40.533214 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:16:40.533225 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-01-30 06:16:40.533236 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-01-30 06:16:40.533246 | orchestrator | 2026-01-30 06:16:40.533257 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:16:40.533267 | orchestrator | Friday 30 January 2026 06:16:21 +0000 (0:00:07.110) 0:28:15.414 ******** 2026-01-30 06:16:40.533303 | orchestrator | skipping: [testbed-node-1] 2026-01-30 06:16:40.533315 | orchestrator | 2026-01-30 06:16:40.533325 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:16:40.533336 | orchestrator | Friday 30 January 2026 06:16:22 +0000 (0:00:00.769) 0:28:16.183 ******** 
2026-01-30 06:16:40.533347 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533358 | orchestrator |
2026-01-30 06:16:40.533369 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 06:16:40.533380 | orchestrator | Friday 30 January 2026 06:16:23 +0000 (0:00:00.763) 0:28:16.947 ********
2026-01-30 06:16:40.533390 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533401 | orchestrator |
2026-01-30 06:16:40.533412 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 06:16:40.533438 | orchestrator | Friday 30 January 2026 06:16:24 +0000 (0:00:00.758) 0:28:17.705 ********
2026-01-30 06:16:40.533449 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533460 | orchestrator |
2026-01-30 06:16:40.533471 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 06:16:40.533499 | orchestrator | Friday 30 January 2026 06:16:24 +0000 (0:00:00.750) 0:28:18.456 ********
2026-01-30 06:16:40.533519 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533530 | orchestrator |
2026-01-30 06:16:40.533541 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 06:16:40.533552 | orchestrator | Friday 30 January 2026 06:16:25 +0000 (0:00:00.769) 0:28:19.226 ********
2026-01-30 06:16:40.533562 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533573 | orchestrator |
2026-01-30 06:16:40.533584 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-30 06:16:40.533595 | orchestrator | Friday 30 January 2026 06:16:26 +0000 (0:00:00.820) 0:28:20.046 ********
2026-01-30 06:16:40.533606 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533616 | orchestrator |
2026-01-30 06:16:40.533627 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 06:16:40.533638 | orchestrator | Friday 30 January 2026 06:16:27 +0000 (0:00:00.765) 0:28:20.812 ********
2026-01-30 06:16:40.533649 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533659 | orchestrator |
2026-01-30 06:16:40.533670 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-30 06:16:40.533681 | orchestrator | Friday 30 January 2026 06:16:28 +0000 (0:00:00.841) 0:28:21.653 ********
2026-01-30 06:16:40.533692 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533703 | orchestrator |
2026-01-30 06:16:40.533713 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-30 06:16:40.533724 | orchestrator | Friday 30 January 2026 06:16:28 +0000 (0:00:00.755) 0:28:22.409 ********
2026-01-30 06:16:40.533804 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533828 | orchestrator |
2026-01-30 06:16:40.533846 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-30 06:16:40.533865 | orchestrator | Friday 30 January 2026 06:16:29 +0000 (0:00:00.770) 0:28:23.180 ********
2026-01-30 06:16:40.533876 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533887 | orchestrator |
2026-01-30 06:16:40.533898 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-30 06:16:40.533908 | orchestrator | Friday 30 January 2026 06:16:30 +0000 (0:00:00.759) 0:28:23.939 ********
2026-01-30 06:16:40.533919 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533929 | orchestrator |
2026-01-30 06:16:40.533940 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-30 06:16:40.533951 | orchestrator | Friday 30 January 2026 06:16:31 +0000 (0:00:00.781) 0:28:24.721 ********
2026-01-30 06:16:40.533962 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.533972 | orchestrator |
2026-01-30 06:16:40.533983 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-30 06:16:40.533993 | orchestrator | Friday 30 January 2026 06:16:31 +0000 (0:00:00.862) 0:28:25.583 ********
2026-01-30 06:16:40.534004 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534015 | orchestrator |
2026-01-30 06:16:40.534092 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-30 06:16:40.534103 | orchestrator | Friday 30 January 2026 06:16:32 +0000 (0:00:00.764) 0:28:26.348 ********
2026-01-30 06:16:40.534113 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534124 | orchestrator |
2026-01-30 06:16:40.534135 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-30 06:16:40.534145 | orchestrator | Friday 30 January 2026 06:16:33 +0000 (0:00:00.868) 0:28:27.217 ********
2026-01-30 06:16:40.534156 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534166 | orchestrator |
2026-01-30 06:16:40.534177 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-30 06:16:40.534188 | orchestrator | Friday 30 January 2026 06:16:34 +0000 (0:00:00.771) 0:28:27.989 ********
2026-01-30 06:16:40.534198 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534209 | orchestrator |
2026-01-30 06:16:40.534220 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:16:40.534241 | orchestrator | Friday 30 January 2026 06:16:35 +0000 (0:00:00.776) 0:28:28.765 ********
2026-01-30 06:16:40.534252 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534263 | orchestrator |
2026-01-30 06:16:40.534273 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:16:40.534284 | orchestrator | Friday 30 January 2026 06:16:35 +0000 (0:00:00.753) 0:28:29.519 ********
2026-01-30 06:16:40.534294 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534305 | orchestrator |
2026-01-30 06:16:40.534315 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:16:40.534326 | orchestrator | Friday 30 January 2026 06:16:36 +0000 (0:00:00.765) 0:28:30.285 ********
2026-01-30 06:16:40.534337 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534347 | orchestrator |
2026-01-30 06:16:40.534358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:16:40.534368 | orchestrator | Friday 30 January 2026 06:16:37 +0000 (0:00:00.879) 0:28:31.165 ********
2026-01-30 06:16:40.534379 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534390 | orchestrator |
2026-01-30 06:16:40.534400 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:16:40.534411 | orchestrator | Friday 30 January 2026 06:16:38 +0000 (0:00:00.860) 0:28:32.025 ********
2026-01-30 06:16:40.534421 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-30 06:16:40.534432 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-30 06:16:40.534443 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-30 06:16:40.534453 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:16:40.534464 | orchestrator |
2026-01-30 06:16:40.534475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:16:40.534492 | orchestrator | Friday 30 January 2026 06:16:39 +0000 (0:00:01.047) 0:28:33.074 ********
2026-01-30 06:16:40.534503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-30 06:16:40.534523 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-30 06:17:38.872310 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-30 06:17:38.872450 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.872459 | orchestrator |
2026-01-30 06:17:38.872467 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:17:38.872475 | orchestrator | Friday 30 January 2026 06:16:40 +0000 (0:00:01.055) 0:28:34.129 ********
2026-01-30 06:17:38.872481 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-01-30 06:17:38.872487 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-01-30 06:17:38.872492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-01-30 06:17:38.872498 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.872503 | orchestrator |
2026-01-30 06:17:38.872509 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:17:38.872515 | orchestrator | Friday 30 January 2026 06:16:41 +0000 (0:00:01.005) 0:28:35.134 ********
2026-01-30 06:17:38.872520 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.872526 | orchestrator |
2026-01-30 06:17:38.872531 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:17:38.872537 | orchestrator | Friday 30 January 2026 06:16:42 +0000 (0:00:00.743) 0:28:35.877 ********
2026-01-30 06:17:38.872543 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-01-30 06:17:38.872549 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.872555 | orchestrator |
2026-01-30 06:17:38.872561 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 06:17:38.872567 | orchestrator | Friday 30 January 2026 06:16:43 +0000 (0:00:00.749) 0:28:36.627 ********
2026-01-30 06:17:38.872572 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:17:38.872578 | orchestrator |
2026-01-30 06:17:38.872583 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-30 06:17:38.872614 | orchestrator | Friday 30 January 2026 06:16:44 +0000 (0:00:01.314) 0:28:37.942 ********
2026-01-30 06:17:38.872620 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:17:38.872626 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-30 06:17:38.872632 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:17:38.872637 | orchestrator |
2026-01-30 06:17:38.872642 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-30 06:17:38.872648 | orchestrator | Friday 30 January 2026 06:16:45 +0000 (0:00:01.655) 0:28:39.597 ********
2026-01-30 06:17:38.872653 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-01-30 06:17:38.872658 | orchestrator |
2026-01-30 06:17:38.872664 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-30 06:17:38.872669 | orchestrator | Friday 30 January 2026 06:16:47 +0000 (0:00:01.088) 0:28:40.686 ********
2026-01-30 06:17:38.872675 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:17:38.872680 | orchestrator |
2026-01-30 06:17:38.872685 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-30 06:17:38.872691 | orchestrator | Friday 30 January 2026 06:16:48 +0000 (0:00:01.528) 0:28:42.214 ********
2026-01-30 06:17:38.872696 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.872701 | orchestrator |
2026-01-30 06:17:38.872707 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-30 06:17:38.872712 | orchestrator | Friday 30 January 2026 06:16:49 +0000 (0:00:01.080) 0:28:43.294 ********
2026-01-30 06:17:38.872717 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:17:38.872763 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:17:38.872769 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:17:38.872775 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-01-30 06:17:38.872780 | orchestrator |
2026-01-30 06:17:38.872786 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-30 06:17:38.872791 | orchestrator | Friday 30 January 2026 06:16:57 +0000 (0:00:07.655) 0:28:50.950 ********
2026-01-30 06:17:38.872797 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:17:38.872802 | orchestrator |
2026-01-30 06:17:38.872807 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-30 06:17:38.872814 | orchestrator | Friday 30 January 2026 06:16:58 +0000 (0:00:01.203) 0:28:52.153 ********
2026-01-30 06:17:38.872821 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-30 06:17:38.872827 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-30 06:17:38.872834 | orchestrator |
2026-01-30 06:17:38.872840 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-30 06:17:38.872846 | orchestrator | Friday 30 January 2026 06:17:01 +0000 (0:00:03.333) 0:28:55.487 ********
2026-01-30 06:17:38.872852 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-30 06:17:38.872859 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-30 06:17:38.872865 | orchestrator |
2026-01-30 06:17:38.872871 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-30 06:17:38.872878 | orchestrator | Friday 30 January 2026 06:17:03 +0000 (0:00:02.043) 0:28:57.531 ********
2026-01-30 06:17:38.872884 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:17:38.872890 | orchestrator |
2026-01-30 06:17:38.872896 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-30 06:17:38.872903 | orchestrator | Friday 30 January 2026 06:17:05 +0000 (0:00:01.664) 0:28:59.195 ********
2026-01-30 06:17:38.872909 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.872915 | orchestrator |
2026-01-30 06:17:38.872921 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-30 06:17:38.872943 | orchestrator | Friday 30 January 2026 06:17:06 +0000 (0:00:00.767) 0:28:59.963 ********
2026-01-30 06:17:38.872954 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.872961 | orchestrator |
2026-01-30 06:17:38.872967 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-30 06:17:38.872989 | orchestrator | Friday 30 January 2026 06:17:07 +0000 (0:00:00.760) 0:29:00.724 ********
2026-01-30 06:17:38.872995 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-01-30 06:17:38.873001 | orchestrator |
2026-01-30 06:17:38.873007 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-30 06:17:38.873014 | orchestrator | Friday 30 January 2026 06:17:08 +0000 (0:00:01.112) 0:29:01.836 ********
2026-01-30 06:17:38.873020 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.873026 | orchestrator |
2026-01-30 06:17:38.873033 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-30 06:17:38.873038 | orchestrator | Friday 30 January 2026 06:17:09 +0000 (0:00:01.147) 0:29:02.984 ********
2026-01-30 06:17:38.873043 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.873049 | orchestrator |
2026-01-30 06:17:38.873054 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-30 06:17:38.873059 | orchestrator | Friday 30 January 2026 06:17:10 +0000 (0:00:01.136) 0:29:04.120 ********
2026-01-30 06:17:38.873065 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-01-30 06:17:38.873070 | orchestrator |
2026-01-30 06:17:38.873075 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-30 06:17:38.873081 | orchestrator | Friday 30 January 2026 06:17:11 +0000 (0:00:02.058) 0:29:05.368 ********
2026-01-30 06:17:38.873086 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:17:38.873091 | orchestrator |
2026-01-30 06:17:38.873096 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-30 06:17:38.873102 | orchestrator | Friday 30 January 2026 06:17:13 +0000 (0:00:02.067) 0:29:07.426 ********
2026-01-30 06:17:38.873107 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:17:38.873113 | orchestrator |
2026-01-30 06:17:38.873118 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-30 06:17:38.873123 | orchestrator | Friday 30 January 2026 06:17:15 +0000 (0:00:02.542) 0:29:09.493 ********
2026-01-30 06:17:38.873128 | orchestrator | ok: [testbed-node-1]
2026-01-30 06:17:38.873134 | orchestrator |
2026-01-30 06:17:38.873139 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-30 06:17:38.873144 | orchestrator | Friday 30 January 2026 06:17:18 +0000 (0:00:02.542) 0:29:12.036 ********
2026-01-30 06:17:38.873150 | orchestrator | changed: [testbed-node-1]
2026-01-30 06:17:38.873155 | orchestrator |
2026-01-30 06:17:38.873161 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-30 06:17:38.873166 | orchestrator | Friday 30 January 2026 06:17:22 +0000 (0:00:03.996) 0:29:16.032 ********
2026-01-30 06:17:38.873171 | orchestrator | skipping: [testbed-node-1]
2026-01-30 06:17:38.873177 | orchestrator |
2026-01-30 06:17:38.873182 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-01-30 06:17:38.873187 | orchestrator |
2026-01-30 06:17:38.873192 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-01-30 06:17:38.873198 | orchestrator | Friday 30 January 2026 06:17:23 +0000 (0:00:00.970) 0:29:17.002 ********
2026-01-30 06:17:38.873203 | orchestrator | changed: [testbed-node-2]
2026-01-30 06:17:38.873208 | orchestrator |
2026-01-30 06:17:38.873213 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-01-30 06:17:38.873219 | orchestrator | Friday 30 January 2026 06:17:25 +0000 (0:00:02.561) 0:29:19.564 ********
2026-01-30 06:17:38.873224 | orchestrator | changed: [testbed-node-2]
2026-01-30 06:17:38.873229 | orchestrator |
2026-01-30 06:17:38.873235 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:17:38.873240 | orchestrator | Friday 30 January 2026 06:17:28 +0000 (0:00:02.089) 0:29:21.654 ********
2026-01-30 06:17:38.873250 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-01-30 06:17:38.873256 | orchestrator |
2026-01-30 06:17:38.873261 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-30 06:17:38.873266 | orchestrator | Friday 30 January 2026 06:17:29 +0000 (0:00:01.102) 0:29:22.757 ********
2026-01-30 06:17:38.873272 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:17:38.873277 | orchestrator |
2026-01-30 06:17:38.873282 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-30 06:17:38.873288 | orchestrator | Friday 30 January 2026 06:17:30 +0000 (0:00:01.470) 0:29:24.227 ********
2026-01-30 06:17:38.873293 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:17:38.873298 | orchestrator |
2026-01-30 06:17:38.873304 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 06:17:38.873309 | orchestrator | Friday 30 January 2026 06:17:31 +0000 (0:00:01.120) 0:29:25.348 ********
2026-01-30 06:17:38.873315 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:17:38.873320 | orchestrator |
2026-01-30 06:17:38.873326 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 06:17:38.873331 | orchestrator | Friday 30 January 2026 06:17:33 +0000 (0:00:01.464) 0:29:26.812 ********
2026-01-30 06:17:38.873336 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:17:38.873342 | orchestrator |
2026-01-30 06:17:38.873347 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 06:17:38.873352 | orchestrator | Friday 30 January 2026 06:17:34 +0000 (0:00:01.109) 0:29:27.922 ********
2026-01-30 06:17:38.873358 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:17:38.873363 | orchestrator |
2026-01-30 06:17:38.873368 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 06:17:38.873374 | orchestrator | Friday 30 January 2026 06:17:35 +0000 (0:00:01.115) 0:29:29.038 ********
2026-01-30 06:17:38.873379 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:17:38.873384 | orchestrator |
2026-01-30 06:17:38.873389 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 06:17:38.873395 | orchestrator | Friday 30 January 2026 06:17:36 +0000 (0:00:01.109) 0:29:30.147 ********
2026-01-30 06:17:38.873400 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:17:38.873405 | orchestrator |
2026-01-30 06:17:38.873415 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 06:17:38.873421 | orchestrator | Friday 30 January 2026 06:17:37 +0000 (0:00:01.119) 0:29:31.266 ********
2026-01-30 06:17:38.873426 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:17:38.873432 | orchestrator |
2026-01-30 06:17:38.873441 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 06:18:04.016274 | orchestrator | Friday 30 January 2026 06:17:38 +0000 (0:00:01.204) 0:29:32.471 ********
2026-01-30 06:18:04.016387 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:18:04.016401 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:18:04.016411 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:18:04.016419 | orchestrator |
2026-01-30 06:18:04.016428 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 06:18:04.016436 | orchestrator | Friday 30 January 2026 06:17:40 +0000 (0:00:02.049) 0:29:34.520 ********
2026-01-30 06:18:04.016445 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:18:04.016453 | orchestrator |
2026-01-30 06:18:04.016462 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 06:18:04.016470 | orchestrator | Friday 30 January 2026 06:17:42 +0000 (0:00:01.242) 0:29:35.763 ********
2026-01-30 06:18:04.016479 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:18:04.016488 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:18:04.016496 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:18:04.016503 | orchestrator |
2026-01-30 06:18:04.016534 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 06:18:04.016542 | orchestrator | Friday 30 January 2026 06:17:45 +0000 (0:00:03.331) 0:29:39.094 ********
2026-01-30 06:18:04.016551 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-30 06:18:04.016559 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-30 06:18:04.016568 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:18:04.016576 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.016584 | orchestrator |
2026-01-30 06:18:04.016592 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-30 06:18:04.016600 | orchestrator | Friday 30 January 2026 06:17:47 +0000 (0:00:01.762) 0:29:40.857 ********
2026-01-30 06:18:04.016610 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016621 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016629 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016638 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.016646 | orchestrator |
2026-01-30 06:18:04.016655 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-30 06:18:04.016663 | orchestrator | Friday 30 January 2026 06:17:49 +0000 (0:00:01.939) 0:29:42.796 ********
2026-01-30 06:18:04.016674 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016685 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016694 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016702 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.016711 | orchestrator |
2026-01-30 06:18:04.016796 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-30 06:18:04.016820 | orchestrator | Friday 30 January 2026 06:17:50 +0000 (0:00:01.188) 0:29:43.985 ********
2026-01-30 06:18:04.016850 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:17:42.697721', 'end': '2026-01-30 06:17:42.745050', 'delta': '0:00:00.047329', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016868 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:17:43.684604', 'end': '2026-01-30 06:17:43.735041', 'delta': '0:00:00.050437', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016877 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:17:44.277120', 'end': '2026-01-30 06:17:44.315037', 'delta': '0:00:00.037917', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:18:04.016885 | orchestrator |
2026-01-30 06:18:04.016893 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-30 06:18:04.016901 | orchestrator | Friday 30 January 2026 06:17:51 +0000 (0:00:01.175) 0:29:45.161 ********
2026-01-30 06:18:04.016910 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:18:04.016917 | orchestrator |
2026-01-30 06:18:04.016925 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 06:18:04.016934 | orchestrator | Friday 30 January 2026 06:17:52 +0000 (0:00:01.307) 0:29:46.468 ********
2026-01-30 06:18:04.016942 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.016950 | orchestrator |
2026-01-30 06:18:04.016958 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 06:18:04.016967 | orchestrator | Friday 30 January 2026 06:17:54 +0000 (0:00:01.222) 0:29:47.691 ********
2026-01-30 06:18:04.016973 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:18:04.016978 | orchestrator |
2026-01-30 06:18:04.016984 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 06:18:04.016989 | orchestrator | Friday 30 January 2026 06:17:55 +0000 (0:00:01.115) 0:29:48.807 ********
2026-01-30 06:18:04.016994 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:18:04.017000 | orchestrator |
2026-01-30 06:18:04.017005 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:18:04.017010 | orchestrator | Friday 30 January 2026 06:17:57 +0000 (0:00:01.994) 0:29:50.801 ********
2026-01-30 06:18:04.017016 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:18:04.017022 | orchestrator |
2026-01-30 06:18:04.017027 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 06:18:04.017033 | orchestrator | Friday 30 January 2026 06:17:58 +0000 (0:00:01.133) 0:29:51.934 ********
2026-01-30 06:18:04.017038 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.017043 | orchestrator |
2026-01-30 06:18:04.017049 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 06:18:04.017054 | orchestrator | Friday 30 January 2026 06:17:59 +0000 (0:00:01.118) 0:29:53.052 ********
2026-01-30 06:18:04.017062 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.017077 | orchestrator |
2026-01-30 06:18:04.017084 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:18:04.017091 | orchestrator | Friday 30 January 2026 06:18:00 +0000 (0:00:01.203) 0:29:54.256 ********
2026-01-30 06:18:04.017099 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.017107 | orchestrator |
2026-01-30 06:18:04.017115 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 06:18:04.017128 | orchestrator | Friday 30 January 2026 06:18:01 +0000 (0:00:01.115) 0:29:55.372 ********
2026-01-30 06:18:04.017136 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.017143 | orchestrator |
2026-01-30 06:18:04.017151 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 06:18:04.017159 | orchestrator | Friday 30 January 2026 06:18:02 +0000 (0:00:01.109) 0:29:56.482 ********
2026-01-30 06:18:04.017167 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:04.017176 | orchestrator |
2026-01-30 06:18:04.017192 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 06:18:11.043371 | orchestrator | Friday 30 January 2026 06:18:04 +0000 (0:00:01.133) 0:29:57.615 ********
2026-01-30 06:18:11.043466 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:11.043478 | orchestrator |
2026-01-30 06:18:11.043487 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 06:18:11.043495 | orchestrator | Friday 30 January 2026 06:18:05 +0000 (0:00:01.121) 0:29:58.736 ********
2026-01-30 06:18:11.043504 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:11.043512 | orchestrator |
2026-01-30 06:18:11.043520 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 06:18:11.043528 | orchestrator | Friday 30 January 2026 06:18:06 +0000 (0:00:01.150) 0:29:59.887 ********
2026-01-30 06:18:11.043536 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:11.043544 | orchestrator |
2026-01-30 06:18:11.043552 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 06:18:11.043560 | orchestrator | Friday 30 January 2026 06:18:07 +0000 (0:00:01.126) 0:30:01.013 ********
2026-01-30 06:18:11.043568 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:18:11.043576 | orchestrator |
2026-01-30 06:18:11.043584 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 06:18:11.043592 | orchestrator | Friday 30 January 2026 06:18:08 +0000 (0:00:01.130) 0:30:02.144 ********
2026-01-30 06:18:11.043602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 06:18:11.043662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b944efd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:18:11.043796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:18:11.043820 | orchestrator |
skipping: [testbed-node-2] 2026-01-30 06:18:11.043828 | orchestrator | 2026-01-30 06:18:11.043836 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:18:11.043845 | orchestrator | Friday 30 January 2026 06:18:09 +0000 (0:00:01.266) 0:30:03.411 ******** 2026-01-30 06:18:11.043854 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:11.043876 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605306 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605434 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605454 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605492 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605501 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605540 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7b944efd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b944efd-69bd-418c-961b-5e326c11b2a6-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605562 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:18:18.605569 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:18.605578 | orchestrator | 2026-01-30 06:18:18.605585 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:18:18.605593 | orchestrator | Friday 30 January 2026 06:18:11 +0000 (0:00:01.233) 0:30:04.645 ******** 2026-01-30 06:18:18.605599 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:18.605606 | orchestrator | 2026-01-30 06:18:18.605613 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:18:18.605619 | orchestrator 
| Friday 30 January 2026 06:18:12 +0000 (0:00:01.526) 0:30:06.172 ******** 2026-01-30 06:18:18.605625 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:18.605632 | orchestrator | 2026-01-30 06:18:18.605638 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:18:18.605644 | orchestrator | Friday 30 January 2026 06:18:13 +0000 (0:00:01.091) 0:30:07.263 ******** 2026-01-30 06:18:18.605650 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:18.605656 | orchestrator | 2026-01-30 06:18:18.605662 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:18:18.605668 | orchestrator | Friday 30 January 2026 06:18:15 +0000 (0:00:01.449) 0:30:08.712 ******** 2026-01-30 06:18:18.605678 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:18.605685 | orchestrator | 2026-01-30 06:18:18.605691 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:18:18.605697 | orchestrator | Friday 30 January 2026 06:18:16 +0000 (0:00:01.135) 0:30:09.848 ******** 2026-01-30 06:18:18.605703 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:18.605709 | orchestrator | 2026-01-30 06:18:18.605757 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:18:18.605765 | orchestrator | Friday 30 January 2026 06:18:17 +0000 (0:00:01.198) 0:30:11.047 ******** 2026-01-30 06:18:18.605771 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:18.605777 | orchestrator | 2026-01-30 06:18:18.605783 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:18:18.605794 | orchestrator | Friday 30 January 2026 06:18:18 +0000 (0:00:01.161) 0:30:12.208 ******** 2026-01-30 06:18:54.518940 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-01-30 06:18:54.519043 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-01-30 06:18:54.519056 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-30 06:18:54.519066 | orchestrator | 2026-01-30 06:18:54.519075 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:18:54.519085 | orchestrator | Friday 30 January 2026 06:18:20 +0000 (0:00:01.994) 0:30:14.203 ******** 2026-01-30 06:18:54.519095 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-30 06:18:54.519104 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-30 06:18:54.519116 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-30 06:18:54.519165 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.519184 | orchestrator | 2026-01-30 06:18:54.519198 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:18:54.519213 | orchestrator | Friday 30 January 2026 06:18:21 +0000 (0:00:01.138) 0:30:15.342 ******** 2026-01-30 06:18:54.519228 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.519241 | orchestrator | 2026-01-30 06:18:54.519256 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:18:54.519271 | orchestrator | Friday 30 January 2026 06:18:22 +0000 (0:00:01.237) 0:30:16.579 ******** 2026-01-30 06:18:54.519286 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:18:54.519303 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:18:54.519318 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-30 06:18:54.519333 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:18:54.519349 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-30 06:18:54.519366 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:18:54.519382 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:18:54.519394 | orchestrator | 2026-01-30 06:18:54.519430 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:18:54.519449 | orchestrator | Friday 30 January 2026 06:18:24 +0000 (0:00:01.760) 0:30:18.340 ******** 2026-01-30 06:18:54.519458 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:18:54.519467 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:18:54.519476 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-01-30 06:18:54.519486 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:18:54.519496 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:18:54.519505 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:18:54.519516 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:18:54.519525 | orchestrator | 2026-01-30 06:18:54.519535 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 06:18:54.519544 | orchestrator | Friday 30 January 2026 06:18:26 +0000 (0:00:02.222) 0:30:20.562 ******** 2026-01-30 06:18:54.519554 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-01-30 06:18:54.519565 | orchestrator | 2026-01-30 06:18:54.519575 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 06:18:54.519584 
| orchestrator | Friday 30 January 2026 06:18:28 +0000 (0:00:01.117) 0:30:21.679 ******** 2026-01-30 06:18:54.519595 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-01-30 06:18:54.519604 | orchestrator | 2026-01-30 06:18:54.519614 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 06:18:54.519624 | orchestrator | Friday 30 January 2026 06:18:29 +0000 (0:00:01.091) 0:30:22.771 ******** 2026-01-30 06:18:54.519634 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:54.519644 | orchestrator | 2026-01-30 06:18:54.519653 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 06:18:54.519663 | orchestrator | Friday 30 January 2026 06:18:30 +0000 (0:00:01.546) 0:30:24.318 ******** 2026-01-30 06:18:54.519673 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.519683 | orchestrator | 2026-01-30 06:18:54.519693 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 06:18:54.519769 | orchestrator | Friday 30 January 2026 06:18:31 +0000 (0:00:01.113) 0:30:25.431 ******** 2026-01-30 06:18:54.519781 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.519792 | orchestrator | 2026-01-30 06:18:54.519803 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 06:18:54.519826 | orchestrator | Friday 30 January 2026 06:18:32 +0000 (0:00:01.081) 0:30:26.513 ******** 2026-01-30 06:18:54.519835 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.519843 | orchestrator | 2026-01-30 06:18:54.519852 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:18:54.519860 | orchestrator | Friday 30 January 2026 06:18:34 +0000 (0:00:01.117) 0:30:27.630 ******** 2026-01-30 06:18:54.519869 | orchestrator | ok: [testbed-node-2] 
2026-01-30 06:18:54.519877 | orchestrator | 2026-01-30 06:18:54.519886 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:18:54.519894 | orchestrator | Friday 30 January 2026 06:18:35 +0000 (0:00:01.573) 0:30:29.204 ******** 2026-01-30 06:18:54.519903 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.519911 | orchestrator | 2026-01-30 06:18:54.519920 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:18:54.519948 | orchestrator | Friday 30 January 2026 06:18:36 +0000 (0:00:01.101) 0:30:30.305 ******** 2026-01-30 06:18:54.519957 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.519966 | orchestrator | 2026-01-30 06:18:54.519974 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:18:54.519983 | orchestrator | Friday 30 January 2026 06:18:37 +0000 (0:00:01.125) 0:30:31.430 ******** 2026-01-30 06:18:54.519992 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:54.520000 | orchestrator | 2026-01-30 06:18:54.520009 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:18:54.520017 | orchestrator | Friday 30 January 2026 06:18:39 +0000 (0:00:01.553) 0:30:32.984 ******** 2026-01-30 06:18:54.520026 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:54.520035 | orchestrator | 2026-01-30 06:18:54.520043 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:18:54.520052 | orchestrator | Friday 30 January 2026 06:18:40 +0000 (0:00:01.582) 0:30:34.567 ******** 2026-01-30 06:18:54.520060 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520069 | orchestrator | 2026-01-30 06:18:54.520077 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:18:54.520086 | orchestrator | Friday 30 
January 2026 06:18:41 +0000 (0:00:00.769) 0:30:35.336 ******** 2026-01-30 06:18:54.520095 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:54.520103 | orchestrator | 2026-01-30 06:18:54.520112 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:18:54.520120 | orchestrator | Friday 30 January 2026 06:18:42 +0000 (0:00:00.799) 0:30:36.136 ******** 2026-01-30 06:18:54.520129 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520138 | orchestrator | 2026-01-30 06:18:54.520146 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:18:54.520155 | orchestrator | Friday 30 January 2026 06:18:43 +0000 (0:00:00.773) 0:30:36.910 ******** 2026-01-30 06:18:54.520163 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520172 | orchestrator | 2026-01-30 06:18:54.520180 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:18:54.520189 | orchestrator | Friday 30 January 2026 06:18:44 +0000 (0:00:00.751) 0:30:37.661 ******** 2026-01-30 06:18:54.520197 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520206 | orchestrator | 2026-01-30 06:18:54.520214 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:18:54.520223 | orchestrator | Friday 30 January 2026 06:18:44 +0000 (0:00:00.735) 0:30:38.397 ******** 2026-01-30 06:18:54.520232 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520240 | orchestrator | 2026-01-30 06:18:54.520249 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:18:54.520264 | orchestrator | Friday 30 January 2026 06:18:45 +0000 (0:00:00.750) 0:30:39.147 ******** 2026-01-30 06:18:54.520273 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520281 | orchestrator | 2026-01-30 06:18:54.520290 | orchestrator 
| TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:18:54.520298 | orchestrator | Friday 30 January 2026 06:18:46 +0000 (0:00:00.737) 0:30:39.885 ******** 2026-01-30 06:18:54.520307 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:54.520316 | orchestrator | 2026-01-30 06:18:54.520324 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:18:54.520333 | orchestrator | Friday 30 January 2026 06:18:47 +0000 (0:00:00.754) 0:30:40.639 ******** 2026-01-30 06:18:54.520341 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:54.520350 | orchestrator | 2026-01-30 06:18:54.520358 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:18:54.520367 | orchestrator | Friday 30 January 2026 06:18:47 +0000 (0:00:00.754) 0:30:41.394 ******** 2026-01-30 06:18:54.520376 | orchestrator | ok: [testbed-node-2] 2026-01-30 06:18:54.520384 | orchestrator | 2026-01-30 06:18:54.520393 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:18:54.520401 | orchestrator | Friday 30 January 2026 06:18:48 +0000 (0:00:00.782) 0:30:42.177 ******** 2026-01-30 06:18:54.520410 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520418 | orchestrator | 2026-01-30 06:18:54.520427 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 06:18:54.520435 | orchestrator | Friday 30 January 2026 06:18:49 +0000 (0:00:00.748) 0:30:42.925 ******** 2026-01-30 06:18:54.520444 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520452 | orchestrator | 2026-01-30 06:18:54.520461 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:18:54.520469 | orchestrator | Friday 30 January 2026 06:18:50 +0000 (0:00:00.742) 0:30:43.668 ******** 2026-01-30 06:18:54.520478 | 
orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520486 | orchestrator | 2026-01-30 06:18:54.520495 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:18:54.520504 | orchestrator | Friday 30 January 2026 06:18:50 +0000 (0:00:00.723) 0:30:44.391 ******** 2026-01-30 06:18:54.520512 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520521 | orchestrator | 2026-01-30 06:18:54.520529 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:18:54.520538 | orchestrator | Friday 30 January 2026 06:18:51 +0000 (0:00:00.732) 0:30:45.124 ******** 2026-01-30 06:18:54.520546 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520555 | orchestrator | 2026-01-30 06:18:54.520568 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:18:54.520577 | orchestrator | Friday 30 January 2026 06:18:52 +0000 (0:00:00.741) 0:30:45.866 ******** 2026-01-30 06:18:54.520586 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520595 | orchestrator | 2026-01-30 06:18:54.520603 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:18:54.520612 | orchestrator | Friday 30 January 2026 06:18:52 +0000 (0:00:00.745) 0:30:46.611 ******** 2026-01-30 06:18:54.520620 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:18:54.520629 | orchestrator | 2026-01-30 06:18:54.520637 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:18:54.520646 | orchestrator | Friday 30 January 2026 06:18:53 +0000 (0:00:00.747) 0:30:47.359 ******** 2026-01-30 06:18:54.520659 | orchestrator | skipping: [testbed-node-2] 2026-01-30 06:19:41.719811 | orchestrator | 2026-01-30 06:19:41.719914 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] 
*************************
2026-01-30 06:19:41.719927 | orchestrator | Friday 30 January 2026 06:18:54 +0000 (0:00:00.760) 0:30:48.119 ********
2026-01-30 06:19:41.719935 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.719943 | orchestrator |
2026-01-30 06:19:41.719949 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-01-30 06:19:41.719977 | orchestrator | Friday 30 January 2026 06:18:55 +0000 (0:00:00.774) 0:30:48.894 ********
2026-01-30 06:19:41.719984 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.719992 | orchestrator |
2026-01-30 06:19:41.719998 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-01-30 06:19:41.720005 | orchestrator | Friday 30 January 2026 06:18:56 +0000 (0:00:00.743) 0:30:49.637 ********
2026-01-30 06:19:41.720013 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720023 | orchestrator |
2026-01-30 06:19:41.720030 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-01-30 06:19:41.720036 | orchestrator | Friday 30 January 2026 06:18:56 +0000 (0:00:00.763) 0:30:50.400 ********
2026-01-30 06:19:41.720039 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720043 | orchestrator |
2026-01-30 06:19:41.720047 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-30 06:19:41.720051 | orchestrator | Friday 30 January 2026 06:18:57 +0000 (0:00:00.901) 0:30:51.302 ********
2026-01-30 06:19:41.720054 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:19:41.720059 | orchestrator |
2026-01-30 06:19:41.720063 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-30 06:19:41.720067 | orchestrator | Friday 30 January 2026 06:18:59 +0000 (0:00:01.629) 0:30:52.931 ********
2026-01-30 06:19:41.720071 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:19:41.720074 | orchestrator |
2026-01-30 06:19:41.720078 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 06:19:41.720082 | orchestrator | Friday 30 January 2026 06:19:01 +0000 (0:00:02.036) 0:30:54.968 ********
2026-01-30 06:19:41.720085 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-01-30 06:19:41.720090 | orchestrator |
2026-01-30 06:19:41.720094 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-30 06:19:41.720098 | orchestrator | Friday 30 January 2026 06:19:02 +0000 (0:00:01.090) 0:30:56.059 ********
2026-01-30 06:19:41.720102 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720105 | orchestrator |
2026-01-30 06:19:41.720109 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-30 06:19:41.720113 | orchestrator | Friday 30 January 2026 06:19:03 +0000 (0:00:01.153) 0:30:57.212 ********
2026-01-30 06:19:41.720116 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720120 | orchestrator |
2026-01-30 06:19:41.720124 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-30 06:19:41.720127 | orchestrator | Friday 30 January 2026 06:19:04 +0000 (0:00:01.114) 0:30:58.327 ********
2026-01-30 06:19:41.720131 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 06:19:41.720135 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 06:19:41.720139 | orchestrator |
2026-01-30 06:19:41.720143 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-30 06:19:41.720147 | orchestrator | Friday 30 January 2026 06:19:06 +0000 (0:00:01.861) 0:31:00.189 ********
2026-01-30 06:19:41.720150 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:19:41.720154 | orchestrator |
2026-01-30 06:19:41.720158 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-30 06:19:41.720161 | orchestrator | Friday 30 January 2026 06:19:08 +0000 (0:00:01.526) 0:31:01.715 ********
2026-01-30 06:19:41.720165 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720169 | orchestrator |
2026-01-30 06:19:41.720172 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-30 06:19:41.720176 | orchestrator | Friday 30 January 2026 06:19:09 +0000 (0:00:01.115) 0:31:02.831 ********
2026-01-30 06:19:41.720180 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720183 | orchestrator |
2026-01-30 06:19:41.720187 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 06:19:41.720190 | orchestrator | Friday 30 January 2026 06:19:09 +0000 (0:00:00.781) 0:31:03.612 ********
2026-01-30 06:19:41.720198 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720202 | orchestrator |
2026-01-30 06:19:41.720206 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 06:19:41.720209 | orchestrator | Friday 30 January 2026 06:19:10 +0000 (0:00:00.771) 0:31:04.384 ********
2026-01-30 06:19:41.720213 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-01-30 06:19:41.720217 | orchestrator |
2026-01-30 06:19:41.720221 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-30 06:19:41.720225 | orchestrator | Friday 30 January 2026 06:19:12 +0000 (0:00:01.238) 0:31:05.623 ********
2026-01-30 06:19:41.720229 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:19:41.720232 | orchestrator |
2026-01-30 06:19:41.720247 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-30 06:19:41.720251 | orchestrator | Friday 30 January 2026 06:19:13 +0000 (0:00:01.709) 0:31:07.333 ********
2026-01-30 06:19:41.720254 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 06:19:41.720258 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 06:19:41.720262 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 06:19:41.720265 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720269 | orchestrator |
2026-01-30 06:19:41.720273 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-30 06:19:41.720277 | orchestrator | Friday 30 January 2026 06:19:14 +0000 (0:00:01.120) 0:31:08.453 ********
2026-01-30 06:19:41.720293 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720297 | orchestrator |
2026-01-30 06:19:41.720301 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-30 06:19:41.720305 | orchestrator | Friday 30 January 2026 06:19:15 +0000 (0:00:01.115) 0:31:09.569 ********
2026-01-30 06:19:41.720309 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720313 | orchestrator |
2026-01-30 06:19:41.720317 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-30 06:19:41.720322 | orchestrator | Friday 30 January 2026 06:19:17 +0000 (0:00:01.157) 0:31:10.727 ********
2026-01-30 06:19:41.720326 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720330 | orchestrator |
2026-01-30 06:19:41.720334 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-30 06:19:41.720338 | orchestrator | Friday 30 January 2026 06:19:18 +0000 (0:00:01.156) 0:31:11.883 ********
2026-01-30 06:19:41.720343 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720347 | orchestrator |
2026-01-30 06:19:41.720351 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-30 06:19:41.720355 | orchestrator | Friday 30 January 2026 06:19:19 +0000 (0:00:01.143) 0:31:13.027 ********
2026-01-30 06:19:41.720359 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720363 | orchestrator |
2026-01-30 06:19:41.720367 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-30 06:19:41.720371 | orchestrator | Friday 30 January 2026 06:19:20 +0000 (0:00:00.763) 0:31:13.791 ********
2026-01-30 06:19:41.720376 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:19:41.720380 | orchestrator |
2026-01-30 06:19:41.720384 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-30 06:19:41.720388 | orchestrator | Friday 30 January 2026 06:19:22 +0000 (0:00:02.241) 0:31:16.033 ********
2026-01-30 06:19:41.720392 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:19:41.720396 | orchestrator |
2026-01-30 06:19:41.720400 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-30 06:19:41.720405 | orchestrator | Friday 30 January 2026 06:19:23 +0000 (0:00:00.767) 0:31:16.801 ********
2026-01-30 06:19:41.720409 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-01-30 06:19:41.720413 | orchestrator |
2026-01-30 06:19:41.720417 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-30 06:19:41.720424 | orchestrator | Friday 30 January 2026 06:19:24 +0000 (0:00:01.086) 0:31:17.888 ********
2026-01-30 06:19:41.720428 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720433 | orchestrator |
2026-01-30 06:19:41.720437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-30 06:19:41.720441 | orchestrator | Friday 30 January 2026 06:19:25 +0000 (0:00:01.116) 0:31:19.004 ********
2026-01-30 06:19:41.720445 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720449 | orchestrator |
2026-01-30 06:19:41.720453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-30 06:19:41.720457 | orchestrator | Friday 30 January 2026 06:19:26 +0000 (0:00:01.163) 0:31:20.167 ********
2026-01-30 06:19:41.720462 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720466 | orchestrator |
2026-01-30 06:19:41.720470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-30 06:19:41.720474 | orchestrator | Friday 30 January 2026 06:19:27 +0000 (0:00:01.111) 0:31:21.279 ********
2026-01-30 06:19:41.720479 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720483 | orchestrator |
2026-01-30 06:19:41.720487 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-30 06:19:41.720491 | orchestrator | Friday 30 January 2026 06:19:28 +0000 (0:00:01.154) 0:31:22.433 ********
2026-01-30 06:19:41.720496 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720499 | orchestrator |
2026-01-30 06:19:41.720503 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-30 06:19:41.720507 | orchestrator | Friday 30 January 2026 06:19:29 +0000 (0:00:01.121) 0:31:23.555 ********
2026-01-30 06:19:41.720510 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720514 | orchestrator |
2026-01-30 06:19:41.720518 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-30 06:19:41.720521 | orchestrator | Friday 30 January 2026 06:19:31 +0000 (0:00:01.154) 0:31:24.710 ********
2026-01-30 06:19:41.720525 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720529 | orchestrator |
2026-01-30 06:19:41.720532 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-30 06:19:41.720536 | orchestrator | Friday 30 January 2026 06:19:32 +0000 (0:00:01.115) 0:31:25.825 ********
2026-01-30 06:19:41.720539 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:19:41.720543 | orchestrator |
2026-01-30 06:19:41.720547 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-30 06:19:41.720550 | orchestrator | Friday 30 January 2026 06:19:33 +0000 (0:00:01.105) 0:31:26.931 ********
2026-01-30 06:19:41.720554 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:19:41.720558 | orchestrator |
2026-01-30 06:19:41.720561 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-30 06:19:41.720565 | orchestrator | Friday 30 January 2026 06:19:34 +0000 (0:00:00.793) 0:31:27.724 ********
2026-01-30 06:19:41.720571 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-01-30 06:19:41.720575 | orchestrator |
2026-01-30 06:19:41.720579 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-30 06:19:41.720583 | orchestrator | Friday 30 January 2026 06:19:35 +0000 (0:00:01.086) 0:31:28.811 ********
2026-01-30 06:19:41.720586 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-01-30 06:19:41.720591 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-01-30 06:19:41.720594 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-01-30 06:19:41.720598 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-01-30 06:19:41.720602 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-01-30 06:19:41.720605 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-01-30 06:19:41.720612 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-01-30 06:20:17.270190 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-01-30 06:20:17.270308 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 06:20:17.270321 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 06:20:17.270328 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 06:20:17.270336 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 06:20:17.270343 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 06:20:17.270351 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 06:20:17.270358 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-01-30 06:20:17.270366 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-01-30 06:20:17.270373 | orchestrator |
2026-01-30 06:20:17.270381 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-30 06:20:17.270388 | orchestrator | Friday 30 January 2026 06:19:41 +0000 (0:00:06.504) 0:31:35.315 ********
2026-01-30 06:20:17.270395 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270402 | orchestrator |
2026-01-30 06:20:17.270410 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-30 06:20:17.270417 | orchestrator | Friday 30 January 2026 06:19:42 +0000 (0:00:00.756) 0:31:36.072 ********
2026-01-30 06:20:17.270424 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270431 | orchestrator |
2026-01-30 06:20:17.270438 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 06:20:17.270445 | orchestrator | Friday 30 January 2026 06:19:43 +0000 (0:00:00.781) 0:31:36.854 ********
2026-01-30 06:20:17.270452 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270460 | orchestrator |
2026-01-30 06:20:17.270467 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 06:20:17.270474 | orchestrator | Friday 30 January 2026 06:19:44 +0000 (0:00:00.798) 0:31:37.652 ********
2026-01-30 06:20:17.270481 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270488 | orchestrator |
2026-01-30 06:20:17.270495 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 06:20:17.270502 | orchestrator | Friday 30 January 2026 06:19:44 +0000 (0:00:00.759) 0:31:38.411 ********
2026-01-30 06:20:17.270509 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270516 | orchestrator |
2026-01-30 06:20:17.270523 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 06:20:17.270530 | orchestrator | Friday 30 January 2026 06:19:45 +0000 (0:00:00.805) 0:31:39.217 ********
2026-01-30 06:20:17.270538 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270545 | orchestrator |
2026-01-30 06:20:17.270552 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-30 06:20:17.270559 | orchestrator | Friday 30 January 2026 06:19:46 +0000 (0:00:00.773) 0:31:39.990 ********
2026-01-30 06:20:17.270566 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270573 | orchestrator |
2026-01-30 06:20:17.270580 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 06:20:17.270588 | orchestrator | Friday 30 January 2026 06:19:47 +0000 (0:00:00.781) 0:31:40.772 ********
2026-01-30 06:20:17.270595 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270602 | orchestrator |
2026-01-30 06:20:17.270609 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-30 06:20:17.270616 | orchestrator | Friday 30 January 2026 06:19:47 +0000 (0:00:00.767) 0:31:41.540 ********
2026-01-30 06:20:17.270623 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270631 | orchestrator |
2026-01-30 06:20:17.270638 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-30 06:20:17.270645 | orchestrator | Friday 30 January 2026 06:19:48 +0000 (0:00:00.790) 0:31:42.331 ********
2026-01-30 06:20:17.270652 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270659 | orchestrator |
2026-01-30 06:20:17.270673 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-30 06:20:17.270680 | orchestrator | Friday 30 January 2026 06:19:49 +0000 (0:00:00.763) 0:31:43.095 ********
2026-01-30 06:20:17.270687 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270715 | orchestrator |
2026-01-30 06:20:17.270723 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-30 06:20:17.270730 | orchestrator | Friday 30 January 2026 06:19:50 +0000 (0:00:00.766) 0:31:43.861 ********
2026-01-30 06:20:17.270738 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270746 | orchestrator |
2026-01-30 06:20:17.270754 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-30 06:20:17.270763 | orchestrator | Friday 30 January 2026 06:19:51 +0000 (0:00:00.779) 0:31:44.640 ********
2026-01-30 06:20:17.270771 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270779 | orchestrator |
2026-01-30 06:20:17.270787 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-30 06:20:17.270795 | orchestrator | Friday 30 January 2026 06:19:51 +0000 (0:00:00.875) 0:31:45.516 ********
2026-01-30 06:20:17.270815 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270824 | orchestrator |
2026-01-30 06:20:17.270832 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-30 06:20:17.270840 | orchestrator | Friday 30 January 2026 06:19:52 +0000 (0:00:00.801) 0:31:46.318 ********
2026-01-30 06:20:17.270849 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270857 | orchestrator |
2026-01-30 06:20:17.270865 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-30 06:20:17.270874 | orchestrator | Friday 30 January 2026 06:19:53 +0000 (0:00:00.862) 0:31:47.181 ********
2026-01-30 06:20:17.270882 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270890 | orchestrator |
2026-01-30 06:20:17.270898 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-30 06:20:17.270906 | orchestrator | Friday 30 January 2026 06:19:54 +0000 (0:00:00.798) 0:31:47.979 ********
2026-01-30 06:20:17.270927 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270937 | orchestrator |
2026-01-30 06:20:17.270945 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:20:17.270955 | orchestrator | Friday 30 January 2026 06:19:55 +0000 (0:00:00.758) 0:31:48.737 ********
2026-01-30 06:20:17.270963 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.270971 | orchestrator |
2026-01-30 06:20:17.270979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:20:17.270987 | orchestrator | Friday 30 January 2026 06:19:55 +0000 (0:00:00.795) 0:31:49.533 ********
2026-01-30 06:20:17.270995 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271003 | orchestrator |
2026-01-30 06:20:17.271011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:20:17.271020 | orchestrator | Friday 30 January 2026 06:19:56 +0000 (0:00:00.757) 0:31:50.291 ********
2026-01-30 06:20:17.271029 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271037 | orchestrator |
2026-01-30 06:20:17.271044 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:20:17.271051 | orchestrator | Friday 30 January 2026 06:19:57 +0000 (0:00:00.748) 0:31:51.040 ********
2026-01-30 06:20:17.271058 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271065 | orchestrator |
2026-01-30 06:20:17.271072 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:20:17.271079 | orchestrator | Friday 30 January 2026 06:19:58 +0000 (0:00:00.772) 0:31:51.812 ********
2026-01-30 06:20:17.271087 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-30 06:20:17.271094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-30 06:20:17.271101 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-30 06:20:17.271108 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271121 | orchestrator |
2026-01-30 06:20:17.271128 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:20:17.271135 | orchestrator | Friday 30 January 2026 06:19:59 +0000 (0:00:01.058) 0:31:52.870 ********
2026-01-30 06:20:17.271143 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-30 06:20:17.271150 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-30 06:20:17.271157 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-30 06:20:17.271164 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271171 | orchestrator |
2026-01-30 06:20:17.271178 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:20:17.271185 | orchestrator | Friday 30 January 2026 06:20:00 +0000 (0:00:01.056) 0:31:53.926 ********
2026-01-30 06:20:17.271193 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-01-30 06:20:17.271200 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-01-30 06:20:17.271207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-01-30 06:20:17.271214 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271221 | orchestrator |
2026-01-30 06:20:17.271228 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:20:17.271235 | orchestrator | Friday 30 January 2026 06:20:01 +0000 (0:00:01.037) 0:31:54.964 ********
2026-01-30 06:20:17.271242 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271249 | orchestrator |
2026-01-30 06:20:17.271257 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:20:17.271264 | orchestrator | Friday 30 January 2026 06:20:02 +0000 (0:00:00.754) 0:31:55.719 ********
2026-01-30 06:20:17.271271 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-01-30 06:20:17.271278 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271285 | orchestrator |
2026-01-30 06:20:17.271292 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 06:20:17.271299 | orchestrator | Friday 30 January 2026 06:20:02 +0000 (0:00:00.868) 0:31:56.587 ********
2026-01-30 06:20:17.271307 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:20:17.271314 | orchestrator |
2026-01-30 06:20:17.271321 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-01-30 06:20:17.271328 | orchestrator | Friday 30 January 2026 06:20:04 +0000 (0:00:01.518) 0:31:58.106 ********
2026-01-30 06:20:17.271336 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:20:17.271343 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:20:17.271350 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-30 06:20:17.271358 | orchestrator |
2026-01-30 06:20:17.271365 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-01-30 06:20:17.271372 | orchestrator | Friday 30 January 2026 06:20:05 +0000 (0:00:01.304) 0:31:59.410 ********
2026-01-30 06:20:17.271437 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-01-30 06:20:17.271445 | orchestrator |
2026-01-30 06:20:17.271452 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-01-30 06:20:17.271464 | orchestrator | Friday 30 January 2026 06:20:06 +0000 (0:00:01.093) 0:32:00.504 ********
2026-01-30 06:20:17.271471 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:20:17.271478 | orchestrator |
2026-01-30 06:20:17.271485 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-01-30 06:20:17.271492 | orchestrator | Friday 30 January 2026 06:20:08 +0000 (0:00:01.608) 0:32:02.113 ********
2026-01-30 06:20:17.271499 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:20:17.271506 | orchestrator |
2026-01-30 06:20:17.271513 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-01-30 06:20:17.271520 | orchestrator | Friday 30 January 2026 06:20:09 +0000 (0:00:01.108) 0:32:03.221 ********
2026-01-30 06:20:17.271527 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:20:17.271541 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:20:17.271553 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:21:04.118469 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-01-30 06:21:04.118531 | orchestrator |
2026-01-30 06:21:04.118538 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-01-30 06:21:04.118543 | orchestrator | Friday 30 January 2026 06:20:17 +0000 (0:00:07.647) 0:32:10.869 ********
2026-01-30 06:21:04.118547 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:21:04.118553 | orchestrator |
2026-01-30 06:21:04.118557 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-01-30 06:21:04.118562 | orchestrator | Friday 30 January 2026 06:20:18 +0000 (0:00:01.134) 0:32:12.003 ********
2026-01-30 06:21:04.118566 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-30 06:21:04.118571 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-01-30 06:21:04.118576 | orchestrator |
2026-01-30 06:21:04.118580 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-01-30 06:21:04.118584 | orchestrator | Friday 30 January 2026 06:20:21 +0000 (0:00:03.346) 0:32:15.349 ********
2026-01-30 06:21:04.118589 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-30 06:21:04.118593 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-01-30 06:21:04.118600 | orchestrator |
2026-01-30 06:21:04.118607 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-01-30 06:21:04.118614 | orchestrator | Friday 30 January 2026 06:20:23 +0000 (0:00:01.998) 0:32:17.348 ********
2026-01-30 06:21:04.118622 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:21:04.118628 | orchestrator |
2026-01-30 06:21:04.118635 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-01-30 06:21:04.118642 | orchestrator | Friday 30 January 2026 06:20:25 +0000 (0:00:01.573) 0:32:18.921 ********
2026-01-30 06:21:04.118649 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:21:04.118657 | orchestrator |
2026-01-30 06:21:04.118663 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-01-30 06:21:04.118671 | orchestrator | Friday 30 January 2026 06:20:26 +0000 (0:00:00.777) 0:32:19.698 ********
2026-01-30 06:21:04.118678 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:21:04.118685 | orchestrator |
2026-01-30 06:21:04.118725 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-01-30 06:21:04.118733 | orchestrator | Friday 30 January 2026 06:20:26 +0000 (0:00:00.753) 0:32:20.451 ********
2026-01-30 06:21:04.118740 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-01-30 06:21:04.118748 | orchestrator |
2026-01-30 06:21:04.118754 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-01-30 06:21:04.118761 | orchestrator | Friday 30 January 2026 06:20:28 +0000 (0:00:01.234) 0:32:21.686 ********
2026-01-30 06:21:04.118769 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:21:04.118776 | orchestrator |
2026-01-30 06:21:04.118784 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-01-30 06:21:04.118791 | orchestrator | Friday 30 January 2026 06:20:29 +0000 (0:00:01.122) 0:32:22.809 ********
2026-01-30 06:21:04.118799 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:21:04.118806 | orchestrator |
2026-01-30 06:21:04.118813 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-01-30 06:21:04.118821 | orchestrator | Friday 30 January 2026 06:20:30 +0000 (0:00:01.162) 0:32:23.971 ********
2026-01-30 06:21:04.118829 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-01-30 06:21:04.118836 | orchestrator |
2026-01-30 06:21:04.118843 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-01-30 06:21:04.118850 | orchestrator | Friday 30 January 2026 06:20:31 +0000 (0:00:01.112) 0:32:25.084 ********
2026-01-30 06:21:04.118858 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:21:04.118881 | orchestrator |
2026-01-30 06:21:04.118888 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-01-30 06:21:04.118895 | orchestrator | Friday 30 January 2026 06:20:33 +0000 (0:00:02.019) 0:32:27.103 ********
2026-01-30 06:21:04.118903 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:21:04.118910 | orchestrator |
2026-01-30 06:21:04.118917 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-01-30 06:21:04.118926 | orchestrator | Friday 30 January 2026 06:20:35 +0000 (0:00:01.997) 0:32:29.101 ********
2026-01-30 06:21:04.118933 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:21:04.118941 | orchestrator |
2026-01-30 06:21:04.118949 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-01-30 06:21:04.118957 | orchestrator | Friday 30 January 2026 06:20:37 +0000 (0:00:02.435) 0:32:31.537 ********
2026-01-30 06:21:04.118965 | orchestrator | changed: [testbed-node-2]
2026-01-30 06:21:04.118973 | orchestrator |
2026-01-30 06:21:04.118981 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-01-30 06:21:04.118989 | orchestrator | Friday 30 January 2026 06:20:41 +0000 (0:00:03.549) 0:32:35.086 ********
2026-01-30 06:21:04.118998 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-01-30 06:21:04.119006 | orchestrator |
2026-01-30 06:21:04.119013 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-01-30 06:21:04.119030 | orchestrator | Friday 30 January 2026 06:20:42 +0000 (0:00:01.474) 0:32:36.560 ********
2026-01-30 06:21:04.119040 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:21:04.119048 | orchestrator |
2026-01-30 06:21:04.119057 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-01-30 06:21:04.119065 | orchestrator | Friday 30 January 2026 06:20:45 +0000 (0:00:02.761) 0:32:39.322 ********
2026-01-30 06:21:04.119073 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:21:04.119082 | orchestrator |
2026-01-30 06:21:04.119091 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-01-30 06:21:04.119099 | orchestrator | Friday 30 January 2026 06:20:48 +0000 (0:00:02.757) 0:32:42.079 ********
2026-01-30 06:21:04.119107 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:21:04.119115 | orchestrator |
2026-01-30 06:21:04.119122 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-01-30 06:21:04.119143 | orchestrator | Friday 30 January 2026 06:20:49 +0000 (0:00:01.353) 0:32:43.433 ********
2026-01-30 06:21:04.119152 | orchestrator | ok: [testbed-node-2]
2026-01-30 06:21:04.119160 | orchestrator |
2026-01-30 06:21:04.119169 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-01-30 06:21:04.119178 | orchestrator | Friday 30 January 2026 06:20:50 +0000 (0:00:01.140) 0:32:44.574 ********
2026-01-30 06:21:04.119186 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-01-30 06:21:04.119195 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-01-30 06:21:04.119203 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:21:04.119210 | orchestrator |
2026-01-30 06:21:04.119219 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-01-30 06:21:04.119226 | orchestrator | Friday 30 January 2026 06:20:52 +0000 (0:00:01.290) 0:32:45.865 ********
2026-01-30 06:21:04.119234 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-01-30 06:21:04.119241 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-01-30 06:21:04.119248 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-01-30 06:21:04.119256 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-01-30 06:21:04.119263 | orchestrator | skipping: [testbed-node-2]
2026-01-30 06:21:04.119271 | orchestrator |
2026-01-30 06:21:04.119278 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-01-30 06:21:04.119285 | orchestrator |
2026-01-30 06:21:04.119292 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 06:21:04.119308 | orchestrator | Friday 30 January 2026 06:20:54 +0000 (0:00:01.975) 0:32:47.841 ********
2026-01-30 06:21:04.119316 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:21:04.119323 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:21:04.119329 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:21:04.119336 | orchestrator |
2026-01-30 06:21:04.119344 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 06:21:04.119351 | orchestrator | Friday 30 January 2026 06:20:55 +0000 (0:00:01.687) 0:32:49.529 ********
2026-01-30 06:21:04.119358 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:21:04.119365 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:21:04.119372 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:21:04.119379 | orchestrator |
2026-01-30 06:21:04.119386 | orchestrator | TASK [Get pool list] ***********************************************************
2026-01-30 06:21:04.119393 | orchestrator | Friday 30 January 2026 06:20:57 +0000 (0:00:01.591) 0:32:51.120 ********
2026-01-30 06:21:04.119400 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:21:04.119407 | orchestrator |
2026-01-30 06:21:04.119414 | orchestrator | TASK [Get balancer module status] **********************************************
2026-01-30 06:21:04.119421 | orchestrator | Friday 30 January 2026 06:21:00 +0000 (0:00:03.100) 0:32:54.221 ********
2026-01-30 06:21:04.119428 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:21:04.119435 | orchestrator |
2026-01-30 06:21:04.119442 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] ****************************************
2026-01-30 06:21:04.119449 | orchestrator | Friday 30 January 2026 06:21:03 +0000 (0:00:02.957) 0:32:57.179 ********
2026-01-30 06:21:04.119465 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-01-30T03:50:50.822680+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-01-30 06:21:04.119486 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-01-30T03:51:55.702582+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '36', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1,
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:04.838369 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-01-30T03:51:59.717257+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '95', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '31', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 
'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 2.25, 'score_stable': 2.25, 'optimal_score': 1, 'raw_score_acting': 2.25, 'raw_score_stable': 2.25, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:04.838452 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-01-30T03:52:55.922568+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '67', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '61', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 
'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:04.838482 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-01-30T03:53:02.081058+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '67', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 
'last_force_op_resend_preluminous': '61', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:04.838492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-01-30T03:53:08.272990+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '67', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '63', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:04.838513 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-01-30T03:53:14.560705+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '174', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '63', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:05.329007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-01-30T03:53:20.580996+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 
'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '67', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '65', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:05.329071 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-01-30T03:53:32.283054+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 
'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '67', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '65', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:05.329087 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-01-30T03:54:13.446433+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 
'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '81', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 81, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:05.329102 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-01-30T03:54:22.418083+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '88', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 88, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:21:05.329116 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-01-30T03:54:31.931289+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '184', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 184, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:22:42.681152 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-01-30T03:54:40.693544+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 
'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '104', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 104, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:22:42.681294 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-01-30T03:54:49.685500+0000', 'flags': 8193, 
'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '113', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 113, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-01-30 06:22:42.681336 | orchestrator | 2026-01-30 06:22:42.681351 | orchestrator | TASK [Disable balancer] 
******************************************************** 2026-01-30 06:22:42.681364 | orchestrator | Friday 30 January 2026 06:21:06 +0000 (0:00:02.785) 0:32:59.964 ******** 2026-01-30 06:22:42.681393 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:22:42.681404 | orchestrator | 2026-01-30 06:22:42.681415 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-01-30 06:22:42.681436 | orchestrator | Friday 30 January 2026 06:21:09 +0000 (0:00:02.891) 0:33:02.855 ******** 2026-01-30 06:22:42.681447 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-01-30 06:22:42.681462 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-01-30 06:22:42.681474 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-01-30 06:22:42.681485 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-01-30 06:22:42.681497 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-01-30 06:22:42.681507 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-01-30 06:22:42.681518 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-01-30 06:22:42.681529 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-01-30 06:22:42.681541 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-01-30 06:22:42.681552 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-01-30 06:22:42.681573 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-01-30 06:22:42.681585 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-01-30 06:22:42.681608 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-01-30 06:22:42.681621 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-01-30 06:22:42.681633 | orchestrator | 2026-01-30 06:22:42.681645 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-01-30 06:22:42.681657 | orchestrator | Friday 30 January 2026 06:22:25 +0000 (0:01:16.222) 0:34:19.078 ******** 2026-01-30 06:22:42.681669 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-01-30 06:22:42.681741 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-01-30 06:22:42.681756 | orchestrator | 2026-01-30 06:22:42.681768 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-01-30 06:22:42.681781 | orchestrator | 2026-01-30 06:22:42.681793 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:22:42.681806 | orchestrator | Friday 30 January 2026 06:22:32 +0000 (0:00:07.200) 0:34:26.278 ******** 2026-01-30 06:22:42.681818 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-01-30 06:22:42.681830 | orchestrator | 2026-01-30 06:22:42.681843 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 06:22:42.681856 | orchestrator | Friday 30 January 2026 06:22:33 +0000 (0:00:01.288) 0:34:27.567 ******** 2026-01-30 06:22:42.681868 | orchestrator | ok: [testbed-node-3] 2026-01-30 
06:22:42.681882 | orchestrator | 2026-01-30 06:22:42.681894 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 06:22:42.681905 | orchestrator | Friday 30 January 2026 06:22:35 +0000 (0:00:01.482) 0:34:29.049 ******** 2026-01-30 06:22:42.681916 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:22:42.681927 | orchestrator | 2026-01-30 06:22:42.681939 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:22:42.681949 | orchestrator | Friday 30 January 2026 06:22:36 +0000 (0:00:01.107) 0:34:30.156 ******** 2026-01-30 06:22:42.681961 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:22:42.681971 | orchestrator | 2026-01-30 06:22:42.681982 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:22:42.681995 | orchestrator | Friday 30 January 2026 06:22:38 +0000 (0:00:01.477) 0:34:31.634 ******** 2026-01-30 06:22:42.682005 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:22:42.682086 | orchestrator | 2026-01-30 06:22:42.682102 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 06:22:42.682114 | orchestrator | Friday 30 January 2026 06:22:39 +0000 (0:00:01.150) 0:34:32.785 ******** 2026-01-30 06:22:42.682125 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:22:42.682136 | orchestrator | 2026-01-30 06:22:42.682148 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 06:22:42.682159 | orchestrator | Friday 30 January 2026 06:22:40 +0000 (0:00:01.159) 0:34:33.945 ******** 2026-01-30 06:22:42.682170 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:22:42.682214 | orchestrator | 2026-01-30 06:22:42.682227 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 06:22:42.682239 | orchestrator | Friday 30 January 2026 06:22:41 +0000 
(0:00:01.167) 0:34:35.112 ******** 2026-01-30 06:22:42.682250 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:22:42.682261 | orchestrator | 2026-01-30 06:22:42.682273 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 06:22:42.682300 | orchestrator | Friday 30 January 2026 06:22:42 +0000 (0:00:01.164) 0:34:36.277 ******** 2026-01-30 06:23:08.276498 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:23:08.276585 | orchestrator | 2026-01-30 06:23:08.276593 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 06:23:08.276601 | orchestrator | Friday 30 January 2026 06:22:43 +0000 (0:00:01.120) 0:34:37.398 ******** 2026-01-30 06:23:08.276607 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:23:08.276631 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:23:08.276637 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:23:08.276642 | orchestrator | 2026-01-30 06:23:08.276648 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 06:23:08.276654 | orchestrator | Friday 30 January 2026 06:22:45 +0000 (0:00:02.041) 0:34:39.439 ******** 2026-01-30 06:23:08.276659 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:23:08.276665 | orchestrator | 2026-01-30 06:23:08.276670 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 06:23:08.276727 | orchestrator | Friday 30 January 2026 06:22:47 +0000 (0:00:01.262) 0:34:40.702 ******** 2026-01-30 06:23:08.276733 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:23:08.276739 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 
2026-01-30 06:23:08.276744 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:23:08.276749 | orchestrator | 2026-01-30 06:23:08.276755 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 06:23:08.276760 | orchestrator | Friday 30 January 2026 06:22:50 +0000 (0:00:03.412) 0:34:44.115 ******** 2026-01-30 06:23:08.276766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-30 06:23:08.276772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-30 06:23:08.276777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-30 06:23:08.276782 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.276788 | orchestrator | 2026-01-30 06:23:08.276796 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 06:23:08.276818 | orchestrator | Friday 30 January 2026 06:22:52 +0000 (0:00:01.785) 0:34:45.900 ******** 2026-01-30 06:23:08.276829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 06:23:08.276841 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 06:23:08.276850 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 06:23:08.276859 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.276866 | 
orchestrator | 2026-01-30 06:23:08.276874 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 06:23:08.276882 | orchestrator | Friday 30 January 2026 06:22:54 +0000 (0:00:02.072) 0:34:47.973 ******** 2026-01-30 06:23:08.276892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:23:08.276904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:23:08.276913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:23:08.276930 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.276938 | orchestrator | 2026-01-30 06:23:08.276947 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 06:23:08.276955 | orchestrator | Friday 30 January 2026 06:22:55 +0000 (0:00:01.171) 0:34:49.144 ******** 2026-01-30 
06:23:08.276981 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:22:47.636612', 'end': '2026-01-30 06:22:47.703038', 'delta': '0:00:00.066426', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 06:23:08.276994 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:22:48.612755', 'end': '2026-01-30 06:22:48.670374', 'delta': '0:00:00.057619', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 06:23:08.277009 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:22:49.230964', 'end': '2026-01-30 06:22:49.278861', 'delta': '0:00:00.047897', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 06:23:08.277019 | orchestrator | 2026-01-30 06:23:08.277027 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 06:23:08.277036 | orchestrator | Friday 30 January 2026 06:22:56 +0000 (0:00:01.238) 0:34:50.383 ******** 2026-01-30 06:23:08.277045 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:23:08.277054 | orchestrator | 2026-01-30 06:23:08.277064 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 06:23:08.277072 | orchestrator | Friday 30 January 2026 06:22:57 +0000 (0:00:01.230) 0:34:51.613 ******** 2026-01-30 06:23:08.277078 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.277085 | orchestrator | 2026-01-30 06:23:08.277091 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 06:23:08.277100 | orchestrator | Friday 30 January 2026 06:22:59 +0000 (0:00:01.282) 0:34:52.895 ******** 2026-01-30 06:23:08.277109 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:23:08.277117 | orchestrator | 2026-01-30 06:23:08.277126 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 06:23:08.277142 | orchestrator | Friday 30 January 2026 06:23:00 +0000 (0:00:01.200) 0:34:54.096 ******** 2026-01-30 06:23:08.277152 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:23:08.277160 | orchestrator | 2026-01-30 06:23:08.277168 | orchestrator | TASK [ceph-facts : Set_fact fsid] 
********************************************** 2026-01-30 06:23:08.277177 | orchestrator | Friday 30 January 2026 06:23:02 +0000 (0:00:02.048) 0:34:56.144 ******** 2026-01-30 06:23:08.277186 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:23:08.277195 | orchestrator | 2026-01-30 06:23:08.277204 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 06:23:08.277212 | orchestrator | Friday 30 January 2026 06:23:03 +0000 (0:00:01.121) 0:34:57.266 ******** 2026-01-30 06:23:08.277222 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.277230 | orchestrator | 2026-01-30 06:23:08.277239 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 06:23:08.277248 | orchestrator | Friday 30 January 2026 06:23:04 +0000 (0:00:01.112) 0:34:58.378 ******** 2026-01-30 06:23:08.277256 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.277265 | orchestrator | 2026-01-30 06:23:08.277273 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:23:08.277282 | orchestrator | Friday 30 January 2026 06:23:05 +0000 (0:00:01.200) 0:34:59.579 ******** 2026-01-30 06:23:08.277290 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.277299 | orchestrator | 2026-01-30 06:23:08.277307 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 06:23:08.277315 | orchestrator | Friday 30 January 2026 06:23:07 +0000 (0:00:01.103) 0:35:00.683 ******** 2026-01-30 06:23:08.277324 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:08.277332 | orchestrator | 2026-01-30 06:23:08.277347 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 06:23:14.385469 | orchestrator | Friday 30 January 2026 06:23:08 +0000 (0:00:01.188) 0:35:01.872 ******** 2026-01-30 06:23:14.385611 | orchestrator | ok: 
[testbed-node-3] 2026-01-30 06:23:14.385638 | orchestrator | 2026-01-30 06:23:14.385658 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-30 06:23:14.385786 | orchestrator | Friday 30 January 2026 06:23:09 +0000 (0:00:01.180) 0:35:03.053 ******** 2026-01-30 06:23:14.385810 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:14.385829 | orchestrator | 2026-01-30 06:23:14.385845 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-30 06:23:14.385863 | orchestrator | Friday 30 January 2026 06:23:10 +0000 (0:00:01.190) 0:35:04.243 ******** 2026-01-30 06:23:14.385881 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:23:14.385900 | orchestrator | 2026-01-30 06:23:14.385920 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 06:23:14.385938 | orchestrator | Friday 30 January 2026 06:23:11 +0000 (0:00:01.177) 0:35:05.421 ******** 2026-01-30 06:23:14.385957 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:14.385976 | orchestrator | 2026-01-30 06:23:14.385995 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 06:23:14.386099 | orchestrator | Friday 30 January 2026 06:23:12 +0000 (0:00:01.169) 0:35:06.590 ******** 2026-01-30 06:23:14.386125 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:23:14.386146 | orchestrator | 2026-01-30 06:23:14.386167 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 06:23:14.386186 | orchestrator | Friday 30 January 2026 06:23:14 +0000 (0:00:01.173) 0:35:07.764 ******** 2026-01-30 06:23:14.386210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:14.386291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}})  2026-01-30 06:23:14.386320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 06:23:14.386342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}})  2026-01-30 06:23:14.386362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:14.386411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:14.386433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 06:23:14.386454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:14.386498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:23:14.386521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:14.386544 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}})  2026-01-30 06:23:14.386564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}})  2026-01-30 06:23:14.386599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:15.753641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:23:15.753857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:15.753890 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:23:15.753904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:23:15.753918 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:23:15.753931 | orchestrator | 2026-01-30 06:23:15.753943 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:23:15.753956 | orchestrator | Friday 30 January 2026 06:23:15 +0000 (0:00:01.373) 0:35:09.138 ******** 2026-01-30 06:23:15.753988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:23:15.754003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:15.754101 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:15.754119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:15.754133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:15.754154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921042 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921167 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921259 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:16.921284 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:54.637139 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:23:54.637224 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637234 | orchestrator |
2026-01-30 06:23:54.637240 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-30 06:23:54.637246 | orchestrator | Friday 30 January 2026 06:23:16 +0000 (0:00:01.379) 0:35:10.518 ********
2026-01-30 06:23:54.637251 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.637257 | orchestrator |
2026-01-30 06:23:54.637262 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-30 06:23:54.637267 | orchestrator | Friday 30 January 2026 06:23:18 +0000 (0:00:01.488) 0:35:12.007 ********
2026-01-30 06:23:54.637272 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.637277 | orchestrator |
2026-01-30 06:23:54.637282 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:23:54.637287 | orchestrator | Friday 30 January 2026 06:23:19 +0000 (0:00:01.121) 0:35:13.129 ********
2026-01-30 06:23:54.637292 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.637296 | orchestrator |
2026-01-30 06:23:54.637301 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:23:54.637306 | orchestrator | Friday 30 January 2026 06:23:21 +0000 (0:00:01.499) 0:35:14.628 ********
2026-01-30 06:23:54.637311 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637315 | orchestrator |
2026-01-30 06:23:54.637320 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:23:54.637325 | orchestrator | Friday 30 January 2026 06:23:22 +0000 (0:00:01.300) 0:35:15.736 ********
2026-01-30 06:23:54.637330 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637335 | orchestrator |
2026-01-30 06:23:54.637339 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:23:54.637344 | orchestrator | Friday 30 January 2026 06:23:23 +0000 (0:00:01.300) 0:35:17.036 ********
2026-01-30 06:23:54.637349 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637354 | orchestrator |
2026-01-30 06:23:54.637358 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-30 06:23:54.637363 | orchestrator | Friday 30 January 2026 06:23:24 +0000 (0:00:01.088) 0:35:18.125 ********
2026-01-30 06:23:54.637368 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 06:23:54.637373 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 06:23:54.637378 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 06:23:54.637383 | orchestrator |
2026-01-30 06:23:54.637387 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-30 06:23:54.637392 | orchestrator | Friday 30 January 2026 06:23:26 +0000 (0:00:01.989) 0:35:20.114 ********
2026-01-30 06:23:54.637397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 06:23:54.637418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 06:23:54.637424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 06:23:54.637428 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637433 | orchestrator |
2026-01-30 06:23:54.637438 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-30 06:23:54.637443 | orchestrator | Friday 30 January 2026 06:23:27 +0000 (0:00:01.164) 0:35:21.279 ********
2026-01-30 06:23:54.637447 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-01-30 06:23:54.637453 | orchestrator |
2026-01-30 06:23:54.637458 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:23:54.637464 | orchestrator | Friday 30 January 2026 06:23:28 +0000 (0:00:01.122) 0:35:22.402 ********
2026-01-30 06:23:54.637468 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637473 | orchestrator |
2026-01-30 06:23:54.637478 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:23:54.637483 | orchestrator | Friday 30 January 2026 06:23:29 +0000 (0:00:01.122) 0:35:23.524 ********
2026-01-30 06:23:54.637487 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637492 | orchestrator |
2026-01-30 06:23:54.637497 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:23:54.637502 | orchestrator | Friday 30 January 2026 06:23:31 +0000 (0:00:01.143) 0:35:24.667 ********
2026-01-30 06:23:54.637506 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637523 | orchestrator |
2026-01-30 06:23:54.637528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:23:54.637533 | orchestrator | Friday 30 January 2026 06:23:32 +0000 (0:00:01.129) 0:35:25.797 ********
2026-01-30 06:23:54.637544 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.637549 | orchestrator |
2026-01-30 06:23:54.637554 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:23:54.637559 | orchestrator | Friday 30 January 2026 06:23:33 +0000 (0:00:01.236) 0:35:27.034 ********
2026-01-30 06:23:54.637563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:23:54.637579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:23:54.637585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:23:54.637589 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637594 | orchestrator |
2026-01-30 06:23:54.637599 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:23:54.637603 | orchestrator | Friday 30 January 2026 06:23:34 +0000 (0:00:01.361) 0:35:28.395 ********
2026-01-30 06:23:54.637608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:23:54.637613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:23:54.637618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:23:54.637622 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637627 | orchestrator |
2026-01-30 06:23:54.637636 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:23:54.637641 | orchestrator | Friday 30 January 2026 06:23:36 +0000 (0:00:01.375) 0:35:29.771 ********
2026-01-30 06:23:54.637645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:23:54.637650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:23:54.637655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:23:54.637660 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:23:54.637664 | orchestrator |
2026-01-30 06:23:54.637703 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:23:54.637710 | orchestrator | Friday 30 January 2026 06:23:37 +0000 (0:00:01.428) 0:35:31.199 ********
2026-01-30 06:23:54.637716 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.637726 | orchestrator |
2026-01-30 06:23:54.637732 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:23:54.637737 | orchestrator | Friday 30 January 2026 06:23:38 +0000 (0:00:01.147) 0:35:32.347 ********
2026-01-30 06:23:54.637743 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-30 06:23:54.637748 | orchestrator |
2026-01-30 06:23:54.637753 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-30 06:23:54.637759 | orchestrator | Friday 30 January 2026 06:23:40 +0000 (0:00:01.355) 0:35:33.702 ********
2026-01-30 06:23:54.637764 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:23:54.637770 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:23:54.637776 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:23:54.637781 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:23:54.637787 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:23:54.637796 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:23:54.637804 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:23:54.637813 | orchestrator |
2026-01-30 06:23:54.637822 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-30 06:23:54.637830 | orchestrator | Friday 30 January 2026 06:23:42 +0000 (0:00:02.207) 0:35:35.910 ********
2026-01-30 06:23:54.637838 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:23:54.637847 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:23:54.637855 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:23:54.637864 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:23:54.637872 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:23:54.637881 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:23:54.637889 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:23:54.637906 | orchestrator |
2026-01-30 06:23:54.637913 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-01-30 06:23:54.637928 | orchestrator | Friday 30 January 2026 06:23:45 +0000 (0:00:03.070) 0:35:38.981 ********
2026-01-30 06:23:54.637936 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.637943 | orchestrator |
2026-01-30 06:23:54.637951 | orchestrator | TASK [Set num_osds] ************************************************************
2026-01-30 06:23:54.637958 | orchestrator | Friday 30 January 2026 06:23:46 +0000 (0:00:01.447) 0:35:40.428 ********
2026-01-30 06:23:54.637966 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.637973 | orchestrator |
2026-01-30 06:23:54.637981 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-01-30 06:23:54.637989 | orchestrator | Friday 30 January 2026 06:23:47 +0000 (0:00:01.115) 0:35:41.544 ********
2026-01-30 06:23:54.637997 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:23:54.638005 | orchestrator |
2026-01-30 06:23:54.638065 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-01-30 06:23:54.638072 | orchestrator | Friday 30 January 2026 06:23:49 +0000 (0:00:01.293) 0:35:42.837 ********
2026-01-30 06:23:54.638077 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-30 06:23:54.638082 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-30 06:23:54.638106 | orchestrator |
2026-01-30 06:23:54.638111 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 06:23:54.638116 | orchestrator | Friday 30 January 2026 06:23:53 +0000 (0:00:04.272) 0:35:47.110 ********
2026-01-30 06:23:54.638121 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-01-30 06:23:54.638133 | orchestrator |
2026-01-30 06:23:54.638138 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 06:23:54.638150 | orchestrator | Friday 30 January 2026 06:23:54 +0000 (0:00:01.123) 0:35:48.234 ********
2026-01-30 06:24:45.262741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-01-30 06:24:45.262837 | orchestrator |
2026-01-30 06:24:45.262847 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 06:24:45.262855 | orchestrator | Friday 30 January 2026 06:23:55 +0000 (0:00:01.110) 0:35:49.344 ********
2026-01-30 06:24:45.262861 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.262868 | orchestrator |
2026-01-30 06:24:45.262875 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 06:24:45.262881 | orchestrator | Friday 30 January 2026 06:23:56 +0000 (0:00:01.138) 0:35:50.483 ********
2026-01-30 06:24:45.262888 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.262895 | orchestrator |
2026-01-30 06:24:45.262913 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 06:24:45.262920 | orchestrator | Friday 30 January 2026 06:23:58 +0000 (0:00:01.548) 0:35:52.032 ********
2026-01-30 06:24:45.262926 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.262932 | orchestrator |
2026-01-30 06:24:45.262938 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 06:24:45.262945 | orchestrator | Friday 30 January 2026 06:23:59 +0000 (0:00:01.507) 0:35:53.539 ********
2026-01-30 06:24:45.262951 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.262957 | orchestrator |
2026-01-30 06:24:45.262963 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 06:24:45.262970 | orchestrator | Friday 30 January 2026 06:24:01 +0000 (0:00:01.669) 0:35:55.208 ********
2026-01-30 06:24:45.262976 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.262982 | orchestrator |
2026-01-30 06:24:45.262988 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 06:24:45.262994 | orchestrator | Friday 30 January 2026 06:24:02 +0000 (0:00:01.120) 0:35:56.329 ********
2026-01-30 06:24:45.263000 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263007 | orchestrator |
2026-01-30 06:24:45.263013 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 06:24:45.263019 | orchestrator | Friday 30 January 2026 06:24:03 +0000 (0:00:01.145) 0:35:57.474 ********
2026-01-30 06:24:45.263025 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263031 | orchestrator |
2026-01-30 06:24:45.263037 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 06:24:45.263044 | orchestrator | Friday 30 January 2026 06:24:04 +0000 (0:00:01.107) 0:35:58.582 ********
2026-01-30 06:24:45.263050 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263056 | orchestrator |
2026-01-30 06:24:45.263062 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 06:24:45.263068 | orchestrator | Friday 30 January 2026 06:24:06 +0000 (0:00:01.540) 0:36:00.123 ********
2026-01-30 06:24:45.263074 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263080 | orchestrator |
2026-01-30 06:24:45.263087 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 06:24:45.263093 | orchestrator | Friday 30 January 2026 06:24:08 +0000 (0:00:01.557) 0:36:01.680 ********
2026-01-30 06:24:45.263099 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263105 | orchestrator |
2026-01-30 06:24:45.263111 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 06:24:45.263118 | orchestrator | Friday 30 January 2026 06:24:09 +0000 (0:00:01.171) 0:36:02.852 ********
2026-01-30 06:24:45.263124 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263130 | orchestrator |
2026-01-30 06:24:45.263136 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 06:24:45.263143 | orchestrator | Friday 30 January 2026 06:24:10 +0000 (0:00:01.141) 0:36:03.993 ********
2026-01-30 06:24:45.263165 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263172 | orchestrator |
2026-01-30 06:24:45.263178 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 06:24:45.263184 | orchestrator | Friday 30 January 2026 06:24:11 +0000 (0:00:01.128) 0:36:05.122 ********
2026-01-30 06:24:45.263190 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263196 | orchestrator |
2026-01-30 06:24:45.263202 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 06:24:45.263209 | orchestrator | Friday 30 January 2026 06:24:12 +0000 (0:00:01.115) 0:36:06.238 ********
2026-01-30 06:24:45.263215 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263221 | orchestrator |
2026-01-30 06:24:45.263227 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 06:24:45.263233 | orchestrator | Friday 30 January 2026 06:24:13 +0000 (0:00:01.138) 0:36:07.376 ********
2026-01-30 06:24:45.263239 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263245 | orchestrator |
2026-01-30 06:24:45.263252 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 06:24:45.263258 | orchestrator | Friday 30 January 2026 06:24:14 +0000 (0:00:01.127) 0:36:08.504 ********
2026-01-30 06:24:45.263264 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263270 | orchestrator |
2026-01-30 06:24:45.263277 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 06:24:45.263285 | orchestrator | Friday 30 January 2026 06:24:16 +0000 (0:00:01.150) 0:36:09.655 ********
2026-01-30 06:24:45.263292 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263299 | orchestrator |
2026-01-30 06:24:45.263306 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 06:24:45.263313 | orchestrator | Friday 30 January 2026 06:24:17 +0000 (0:00:01.134) 0:36:10.789 ********
2026-01-30 06:24:45.263320 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263327 | orchestrator |
2026-01-30 06:24:45.263334 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 06:24:45.263341 | orchestrator | Friday 30 January 2026 06:24:18 +0000 (0:00:01.154) 0:36:11.943 ********
2026-01-30 06:24:45.263348 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263356 | orchestrator |
2026-01-30 06:24:45.263363 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 06:24:45.263370 | orchestrator | Friday 30 January 2026 06:24:19 +0000 (0:00:01.146) 0:36:13.090 ********
2026-01-30 06:24:45.263377 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263384 | orchestrator |
2026-01-30 06:24:45.263405 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 06:24:45.263413 | orchestrator | Friday 30 January 2026 06:24:20 +0000 (0:00:01.151) 0:36:14.242 ********
2026-01-30 06:24:45.263420 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263427 | orchestrator |
2026-01-30 06:24:45.263434 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 06:24:45.263441 | orchestrator | Friday 30 January 2026 06:24:21 +0000 (0:00:01.115) 0:36:15.357 ********
2026-01-30 06:24:45.263448 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263455 | orchestrator |
2026-01-30 06:24:45.263462 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 06:24:45.263469 | orchestrator | Friday 30 January 2026 06:24:22 +0000 (0:00:01.156) 0:36:16.513 ********
2026-01-30 06:24:45.263479 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263486 | orchestrator |
2026-01-30 06:24:45.263494 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 06:24:45.263501 | orchestrator | Friday 30 January 2026 06:24:24 +0000 (0:00:01.174) 0:36:17.688 ********
2026-01-30 06:24:45.263508 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263515 | orchestrator |
2026-01-30 06:24:45.263523 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 06:24:45.263530 | orchestrator | Friday 30 January 2026 06:24:25 +0000 (0:00:01.093) 0:36:18.781 ********
2026-01-30 06:24:45.263545 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263552 | orchestrator |
2026-01-30 06:24:45.263559 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 06:24:45.263566 | orchestrator | Friday 30 January 2026 06:24:26 +0000 (0:00:01.145) 0:36:19.927 ********
2026-01-30 06:24:45.263573 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263580 | orchestrator |
2026-01-30 06:24:45.263586 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-01-30 06:24:45.263593 | orchestrator | Friday 30 January 2026 06:24:27 +0000 (0:00:01.123) 0:36:21.051 ********
2026-01-30 06:24:45.263599 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263606 | orchestrator |
2026-01-30 06:24:45.263612 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-01-30 06:24:45.263618 | orchestrator | Friday 30 January 2026 06:24:28 +0000 (0:00:01.116) 0:36:22.167 ********
2026-01-30 06:24:45.263624 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263630 | orchestrator |
2026-01-30 06:24:45.263636 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-01-30 06:24:45.263642 | orchestrator | Friday 30 January 2026 06:24:29 +0000 (0:00:01.108) 0:36:23.276 ********
2026-01-30 06:24:45.263648 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263654 | orchestrator |
2026-01-30 06:24:45.263707 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-01-30 06:24:45.263716 | orchestrator | Friday 30 January 2026 06:24:30 +0000 (0:00:01.159) 0:36:24.436 ********
2026-01-30 06:24:45.263723 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263729 | orchestrator |
2026-01-30 06:24:45.263735 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-01-30 06:24:45.263741 | orchestrator | Friday 30 January 2026 06:24:32 +0000 (0:00:01.189) 0:36:25.626 ********
2026-01-30 06:24:45.263747 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263753 | orchestrator |
2026-01-30 06:24:45.263760 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-30 06:24:45.263766 | orchestrator | Friday 30 January 2026 06:24:33 +0000 (0:00:01.142) 0:36:26.768 ********
2026-01-30 06:24:45.263772 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263778 | orchestrator |
2026-01-30 06:24:45.263784 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-30 06:24:45.263790 | orchestrator | Friday 30 January 2026 06:24:35 +0000 (0:00:01.940) 0:36:28.709 ********
2026-01-30 06:24:45.263796 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263803 | orchestrator |
2026-01-30 06:24:45.263809 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 06:24:45.263815 | orchestrator | Friday 30 January 2026 06:24:37 +0000 (0:00:02.226) 0:36:30.935 ********
2026-01-30 06:24:45.263821 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-01-30 06:24:45.263827 | orchestrator |
2026-01-30 06:24:45.263833 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-30 06:24:45.263840 | orchestrator | Friday 30 January 2026 06:24:38 +0000 (0:00:01.176) 0:36:32.111 ********
2026-01-30 06:24:45.263846 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263852 | orchestrator |
2026-01-30 06:24:45.263858 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-30 06:24:45.263864 | orchestrator | Friday 30 January 2026 06:24:39 +0000 (0:00:01.136) 0:36:33.248 ********
2026-01-30 06:24:45.263870 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263877 | orchestrator |
2026-01-30 06:24:45.263883 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-30 06:24:45.263889 | orchestrator | Friday 30 January 2026 06:24:40 +0000 (0:00:01.160) 0:36:34.409 ********
2026-01-30 06:24:45.263895 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 06:24:45.263901 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 06:24:45.263913 | orchestrator |
2026-01-30 06:24:45.263919 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-30 06:24:45.263925 | orchestrator | Friday 30 January 2026 06:24:42 +0000 (0:00:01.848) 0:36:36.258 ********
2026-01-30 06:24:45.263931 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:24:45.263937 | orchestrator |
2026-01-30 06:24:45.263943 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-30 06:24:45.263949 | orchestrator | Friday 30 January 2026 06:24:44 +0000 (0:00:01.455) 0:36:37.713 ********
2026-01-30 06:24:45.263956 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:24:45.263962 | orchestrator |
2026-01-30 06:24:45.263968 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-30 06:24:45.263978 | orchestrator | Friday 30 January 2026 06:24:45 +0000 (0:00:01.147) 0:36:38.860 ********
2026-01-30 06:25:31.542088 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:25:31.542208 | orchestrator |
2026-01-30 06:25:31.542227 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 06:25:31.542240 | orchestrator | Friday 30 January 2026 06:24:46 +0000 (0:00:01.163) 0:36:40.024 ********
2026-01-30 06:25:31.542252 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:25:31.542263 | orchestrator |
2026-01-30 06:25:31.542275 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 06:25:31.542286 | orchestrator | Friday 30 January 2026 06:24:47 +0000 (0:00:01.194) 0:36:41.219 ********
2026-01-30 06:25:31.542297 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-01-30 06:25:31.542309 | orchestrator |
2026-01-30 06:25:31.542371 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:25:31.542385 | orchestrator | Friday 30 January 2026 06:24:48 +0000 (0:00:01.126) 0:36:42.345 ******** 2026-01-30 06:25:31.542396 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:25:31.542408 | orchestrator | 2026-01-30 06:25:31.542420 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:25:31.542441 | orchestrator | Friday 30 January 2026 06:24:50 +0000 (0:00:01.729) 0:36:44.075 ******** 2026-01-30 06:25:31.542461 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:25:31.542482 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:25:31.542500 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:25:31.542519 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.542538 | orchestrator | 2026-01-30 06:25:31.542556 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:25:31.542577 | orchestrator | Friday 30 January 2026 06:24:51 +0000 (0:00:01.163) 0:36:45.239 ******** 2026-01-30 06:25:31.542598 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.542619 | orchestrator | 2026-01-30 06:25:31.542641 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:25:31.542735 | orchestrator | Friday 30 January 2026 06:24:52 +0000 (0:00:01.121) 0:36:46.360 ******** 2026-01-30 06:25:31.542757 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.542777 | orchestrator | 2026-01-30 06:25:31.542796 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:25:31.542817 | orchestrator | Friday 30 January 2026 06:24:53 +0000 
(0:00:01.240) 0:36:47.600 ******** 2026-01-30 06:25:31.542838 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.542856 | orchestrator | 2026-01-30 06:25:31.542877 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:25:31.542899 | orchestrator | Friday 30 January 2026 06:24:55 +0000 (0:00:01.142) 0:36:48.743 ******** 2026-01-30 06:25:31.542920 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.542942 | orchestrator | 2026-01-30 06:25:31.542960 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:25:31.542979 | orchestrator | Friday 30 January 2026 06:24:56 +0000 (0:00:01.154) 0:36:49.897 ******** 2026-01-30 06:25:31.543030 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543051 | orchestrator | 2026-01-30 06:25:31.543070 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:25:31.543088 | orchestrator | Friday 30 January 2026 06:24:57 +0000 (0:00:01.143) 0:36:51.041 ******** 2026-01-30 06:25:31.543107 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:25:31.543127 | orchestrator | 2026-01-30 06:25:31.543146 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:25:31.543165 | orchestrator | Friday 30 January 2026 06:24:59 +0000 (0:00:02.551) 0:36:53.592 ******** 2026-01-30 06:25:31.543183 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:25:31.543203 | orchestrator | 2026-01-30 06:25:31.543221 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:25:31.543240 | orchestrator | Friday 30 January 2026 06:25:01 +0000 (0:00:01.161) 0:36:54.754 ******** 2026-01-30 06:25:31.543260 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-01-30 06:25:31.543281 | orchestrator | 2026-01-30 
06:25:31.543418 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:25:31.543448 | orchestrator | Friday 30 January 2026 06:25:02 +0000 (0:00:01.291) 0:36:56.046 ******** 2026-01-30 06:25:31.543467 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543485 | orchestrator | 2026-01-30 06:25:31.543504 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:25:31.543523 | orchestrator | Friday 30 January 2026 06:25:03 +0000 (0:00:01.119) 0:36:57.165 ******** 2026-01-30 06:25:31.543542 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543561 | orchestrator | 2026-01-30 06:25:31.543580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:25:31.543599 | orchestrator | Friday 30 January 2026 06:25:04 +0000 (0:00:01.179) 0:36:58.344 ******** 2026-01-30 06:25:31.543616 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543635 | orchestrator | 2026-01-30 06:25:31.543653 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:25:31.543706 | orchestrator | Friday 30 January 2026 06:25:05 +0000 (0:00:01.141) 0:36:59.486 ******** 2026-01-30 06:25:31.543726 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543745 | orchestrator | 2026-01-30 06:25:31.543763 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:25:31.543779 | orchestrator | Friday 30 January 2026 06:25:06 +0000 (0:00:01.111) 0:37:00.598 ******** 2026-01-30 06:25:31.543796 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543815 | orchestrator | 2026-01-30 06:25:31.543832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:25:31.543851 | orchestrator | Friday 30 January 2026 06:25:07 +0000 (0:00:00.986) 
0:37:01.584 ******** 2026-01-30 06:25:31.543869 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543888 | orchestrator | 2026-01-30 06:25:31.543937 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:25:31.543958 | orchestrator | Friday 30 January 2026 06:25:08 +0000 (0:00:00.912) 0:37:02.497 ******** 2026-01-30 06:25:31.543975 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.543992 | orchestrator | 2026-01-30 06:25:31.544009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:25:31.544027 | orchestrator | Friday 30 January 2026 06:25:09 +0000 (0:00:00.963) 0:37:03.461 ******** 2026-01-30 06:25:31.544045 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.544062 | orchestrator | 2026-01-30 06:25:31.544081 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:25:31.544100 | orchestrator | Friday 30 January 2026 06:25:10 +0000 (0:00:00.897) 0:37:04.359 ******** 2026-01-30 06:25:31.544134 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:25:31.544154 | orchestrator | 2026-01-30 06:25:31.544169 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:25:31.544197 | orchestrator | Friday 30 January 2026 06:25:11 +0000 (0:00:01.006) 0:37:05.366 ******** 2026-01-30 06:25:31.544208 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-01-30 06:25:31.544220 | orchestrator | 2026-01-30 06:25:31.544231 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:25:31.544241 | orchestrator | Friday 30 January 2026 06:25:12 +0000 (0:00:01.066) 0:37:06.432 ******** 2026-01-30 06:25:31.544252 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-01-30 06:25:31.544264 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-01-30 06:25:31.544275 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-30 06:25:31.544286 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-30 06:25:31.544296 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-30 06:25:31.544307 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-30 06:25:31.544318 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-30 06:25:31.544328 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:25:31.544339 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:25:31.544350 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:25:31.544361 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:25:31.544372 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:25:31.544382 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:25:31.544394 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:25:31.544404 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-01-30 06:25:31.544415 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-01-30 06:25:31.544426 | orchestrator | 2026-01-30 06:25:31.544436 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:25:31.544447 | orchestrator | Friday 30 January 2026 06:25:19 +0000 (0:00:06.768) 0:37:13.201 ******** 2026-01-30 06:25:31.544458 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-01-30 06:25:31.544468 | orchestrator | 2026-01-30 06:25:31.544479 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-01-30 06:25:31.544490 | orchestrator | Friday 30 January 2026 06:25:21 +0000 (0:00:01.578) 0:37:14.780 ******** 2026-01-30 06:25:31.544501 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:25:31.544513 | orchestrator | 2026-01-30 06:25:31.544524 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-30 06:25:31.544534 | orchestrator | Friday 30 January 2026 06:25:22 +0000 (0:00:01.471) 0:37:16.251 ******** 2026-01-30 06:25:31.544545 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:25:31.544556 | orchestrator | 2026-01-30 06:25:31.544567 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:25:31.544577 | orchestrator | Friday 30 January 2026 06:25:24 +0000 (0:00:01.990) 0:37:18.242 ******** 2026-01-30 06:25:31.544588 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.544599 | orchestrator | 2026-01-30 06:25:31.544609 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:25:31.544620 | orchestrator | Friday 30 January 2026 06:25:25 +0000 (0:00:01.165) 0:37:19.407 ******** 2026-01-30 06:25:31.544631 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.544641 | orchestrator | 2026-01-30 06:25:31.544652 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:25:31.544718 | orchestrator | Friday 30 January 2026 06:25:26 +0000 (0:00:01.133) 0:37:20.541 ******** 2026-01-30 06:25:31.544738 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.544748 | orchestrator | 2026-01-30 06:25:31.544759 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-01-30 06:25:31.544770 | orchestrator | Friday 30 January 2026 06:25:28 +0000 (0:00:01.149) 0:37:21.690 ******** 2026-01-30 06:25:31.544781 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.544792 | orchestrator | 2026-01-30 06:25:31.544803 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:25:31.544813 | orchestrator | Friday 30 January 2026 06:25:29 +0000 (0:00:01.094) 0:37:22.785 ******** 2026-01-30 06:25:31.544824 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.544835 | orchestrator | 2026-01-30 06:25:31.544845 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:25:31.544856 | orchestrator | Friday 30 January 2026 06:25:30 +0000 (0:00:01.218) 0:37:24.004 ******** 2026-01-30 06:25:31.544867 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:25:31.544877 | orchestrator | 2026-01-30 06:25:31.544901 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:26:22.213909 | orchestrator | Friday 30 January 2026 06:25:31 +0000 (0:00:01.134) 0:37:25.138 ******** 2026-01-30 06:26:22.213995 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214008 | orchestrator | 2026-01-30 06:26:22.214058 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:26:22.214068 | orchestrator | Friday 30 January 2026 06:25:32 +0000 (0:00:01.115) 0:37:26.253 ******** 2026-01-30 06:26:22.214075 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214081 | orchestrator | 2026-01-30 06:26:22.214102 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:26:22.214109 | orchestrator | Friday 30 January 2026 06:25:33 +0000 (0:00:01.137) 0:37:27.391 ******** 
2026-01-30 06:26:22.214117 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214124 | orchestrator | 2026-01-30 06:26:22.214131 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:26:22.214137 | orchestrator | Friday 30 January 2026 06:25:34 +0000 (0:00:01.111) 0:37:28.503 ******** 2026-01-30 06:26:22.214144 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214151 | orchestrator | 2026-01-30 06:26:22.214158 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:26:22.214165 | orchestrator | Friday 30 January 2026 06:25:36 +0000 (0:00:01.287) 0:37:29.790 ******** 2026-01-30 06:26:22.214171 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:26:22.214180 | orchestrator | 2026-01-30 06:26:22.214184 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:26:22.214188 | orchestrator | Friday 30 January 2026 06:25:37 +0000 (0:00:01.183) 0:37:30.974 ******** 2026-01-30 06:26:22.214193 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-30 06:26:22.214196 | orchestrator | 2026-01-30 06:26:22.214200 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:26:22.214204 | orchestrator | Friday 30 January 2026 06:25:41 +0000 (0:00:04.342) 0:37:35.316 ******** 2026-01-30 06:26:22.214208 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:26:22.214213 | orchestrator | 2026-01-30 06:26:22.214217 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:26:22.214221 | orchestrator | Friday 30 January 2026 06:25:42 +0000 (0:00:01.177) 0:37:36.493 ******** 2026-01-30 06:26:22.214226 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-01-30 06:26:22.214249 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-01-30 06:26:22.214254 | orchestrator | 2026-01-30 06:26:22.214258 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:26:22.214262 | orchestrator | Friday 30 January 2026 06:25:50 +0000 (0:00:07.719) 0:37:44.213 ******** 2026-01-30 06:26:22.214265 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214269 | orchestrator | 2026-01-30 06:26:22.214273 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:26:22.214277 | orchestrator | Friday 30 January 2026 06:25:51 +0000 (0:00:01.069) 0:37:45.283 ******** 2026-01-30 06:26:22.214280 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214284 | orchestrator | 2026-01-30 06:26:22.214288 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:26:22.214292 | orchestrator | Friday 30 January 2026 06:25:52 +0000 (0:00:01.107) 0:37:46.390 ******** 2026-01-30 06:26:22.214296 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214300 | orchestrator | 2026-01-30 06:26:22.214303 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 
06:26:22.214307 | orchestrator | Friday 30 January 2026 06:25:53 +0000 (0:00:01.207) 0:37:47.598 ******** 2026-01-30 06:26:22.214311 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214315 | orchestrator | 2026-01-30 06:26:22.214318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:26:22.214322 | orchestrator | Friday 30 January 2026 06:25:55 +0000 (0:00:01.137) 0:37:48.735 ******** 2026-01-30 06:26:22.214326 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214330 | orchestrator | 2026-01-30 06:26:22.214333 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:26:22.214337 | orchestrator | Friday 30 January 2026 06:25:56 +0000 (0:00:01.171) 0:37:49.907 ******** 2026-01-30 06:26:22.214341 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:26:22.214345 | orchestrator | 2026-01-30 06:26:22.214348 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:26:22.214352 | orchestrator | Friday 30 January 2026 06:25:57 +0000 (0:00:01.219) 0:37:51.127 ******** 2026-01-30 06:26:22.214356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:26:22.214361 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:26:22.214364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:26:22.214368 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214372 | orchestrator | 2026-01-30 06:26:22.214376 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:26:22.214392 | orchestrator | Friday 30 January 2026 06:25:59 +0000 (0:00:01.821) 0:37:52.949 ******** 2026-01-30 06:26:22.214396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:26:22.214400 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-01-30 06:26:22.214403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:26:22.214407 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214411 | orchestrator | 2026-01-30 06:26:22.214415 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:26:22.214419 | orchestrator | Friday 30 January 2026 06:26:01 +0000 (0:00:01.757) 0:37:54.707 ******** 2026-01-30 06:26:22.214423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:26:22.214427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:26:22.214431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:26:22.214435 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214442 | orchestrator | 2026-01-30 06:26:22.214446 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:26:22.214450 | orchestrator | Friday 30 January 2026 06:26:02 +0000 (0:00:01.870) 0:37:56.577 ******** 2026-01-30 06:26:22.214453 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:26:22.214457 | orchestrator | 2026-01-30 06:26:22.214461 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:26:22.214465 | orchestrator | Friday 30 January 2026 06:26:04 +0000 (0:00:01.154) 0:37:57.731 ******** 2026-01-30 06:26:22.214469 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-30 06:26:22.214472 | orchestrator | 2026-01-30 06:26:22.214476 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:26:22.214480 | orchestrator | Friday 30 January 2026 06:26:05 +0000 (0:00:01.377) 0:37:59.109 ******** 2026-01-30 06:26:22.214484 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:26:22.214488 | orchestrator | 2026-01-30 06:26:22.214491 | orchestrator | TASK 
[ceph-osd : Set_fact add_osd] ********************************************* 2026-01-30 06:26:22.214495 | orchestrator | Friday 30 January 2026 06:26:07 +0000 (0:00:01.809) 0:38:00.919 ******** 2026-01-30 06:26:22.214499 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:26:22.214503 | orchestrator | 2026-01-30 06:26:22.214506 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-30 06:26:22.214510 | orchestrator | Friday 30 January 2026 06:26:08 +0000 (0:00:01.184) 0:38:02.104 ******** 2026-01-30 06:26:22.214514 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:26:22.214518 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:26:22.214522 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:26:22.214526 | orchestrator | 2026-01-30 06:26:22.214529 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-30 06:26:22.214533 | orchestrator | Friday 30 January 2026 06:26:10 +0000 (0:00:01.688) 0:38:03.792 ******** 2026-01-30 06:26:22.214537 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-01-30 06:26:22.214541 | orchestrator | 2026-01-30 06:26:22.214545 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-30 06:26:22.214548 | orchestrator | Friday 30 January 2026 06:26:11 +0000 (0:00:01.500) 0:38:05.293 ******** 2026-01-30 06:26:22.214552 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214556 | orchestrator | 2026-01-30 06:26:22.214560 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-30 06:26:22.214563 | orchestrator | Friday 30 January 2026 06:26:12 +0000 (0:00:01.124) 0:38:06.418 ******** 2026-01-30 06:26:22.214567 | 
orchestrator | skipping: [testbed-node-3] 2026-01-30 06:26:22.214571 | orchestrator | 2026-01-30 06:26:22.214575 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-30 06:26:22.214579 | orchestrator | Friday 30 January 2026 06:26:13 +0000 (0:00:01.118) 0:38:07.536 ******** 2026-01-30 06:26:22.214583 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:26:22.214586 | orchestrator | 2026-01-30 06:26:22.214590 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-30 06:26:22.214594 | orchestrator | Friday 30 January 2026 06:26:15 +0000 (0:00:01.467) 0:38:09.004 ******** 2026-01-30 06:26:22.214598 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:26:22.214602 | orchestrator | 2026-01-30 06:26:22.214605 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-30 06:26:22.214609 | orchestrator | Friday 30 January 2026 06:26:16 +0000 (0:00:01.175) 0:38:10.180 ******** 2026-01-30 06:26:22.214613 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-30 06:26:22.214617 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-30 06:26:22.214621 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-30 06:26:22.214629 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-30 06:26:22.214633 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-30 06:26:22.214636 | orchestrator | 2026-01-30 06:26:22.214640 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-01-30 06:26:22.214644 | orchestrator | Friday 30 January 2026 06:26:19 +0000 (0:00:03.011) 0:38:13.191 ******** 2026-01-30 06:26:22.214671 | orchestrator | skipping: [testbed-node-3] 
2026-01-30 06:26:22.214675 | orchestrator |
2026-01-30 06:26:22.214679 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-30 06:26:22.214682 | orchestrator | Friday 30 January 2026 06:26:20 +0000 (0:00:01.160) 0:38:14.352 ********
2026-01-30 06:26:22.214686 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3
2026-01-30 06:26:22.214690 | orchestrator |
2026-01-30 06:26:22.214694 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-30 06:27:27.344926 | orchestrator | Friday 30 January 2026 06:26:22 +0000 (0:00:01.457) 0:38:15.810 ********
2026-01-30 06:27:27.345058 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-30 06:27:27.345068 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-30 06:27:27.345072 | orchestrator |
2026-01-30 06:27:27.345077 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-30 06:27:27.345081 | orchestrator | Friday 30 January 2026 06:26:24 +0000 (0:00:01.876) 0:38:17.686 ********
2026-01-30 06:27:27.345088 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:27:27.345092 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-30 06:27:27.345096 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-30 06:27:27.345100 | orchestrator |
2026-01-30 06:27:27.345104 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-30 06:27:27.345107 | orchestrator | Friday 30 January 2026 06:26:27 +0000 (0:00:03.265) 0:38:20.951 ********
2026-01-30 06:27:27.345111 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-01-30 06:27:27.345116 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-30 06:27:27.345120 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:27:27.345123 | orchestrator |
2026-01-30 06:27:27.345127 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-30 06:27:27.345131 | orchestrator | Friday 30 January 2026 06:26:29 +0000 (0:00:01.947) 0:38:22.899 ********
2026-01-30 06:27:27.345135 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345139 | orchestrator |
2026-01-30 06:27:27.345142 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-30 06:27:27.345147 | orchestrator | Friday 30 January 2026 06:26:30 +0000 (0:00:01.220) 0:38:24.120 ********
2026-01-30 06:27:27.345151 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345154 | orchestrator |
2026-01-30 06:27:27.345159 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-30 06:27:27.345162 | orchestrator | Friday 30 January 2026 06:26:31 +0000 (0:00:01.121) 0:38:25.241 ********
2026-01-30 06:27:27.345166 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345170 | orchestrator |
2026-01-30 06:27:27.345174 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-30 06:27:27.345177 | orchestrator | Friday 30 January 2026 06:26:32 +0000 (0:00:01.098) 0:38:26.339 ********
2026-01-30 06:27:27.345181 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3
2026-01-30 06:27:27.345186 | orchestrator |
2026-01-30 06:27:27.345190 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-30 06:27:27.345193 | orchestrator | Friday 30 January 2026 06:26:34 +0000 (0:00:01.493) 0:38:27.833 ********
2026-01-30 06:27:27.345197 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:27:27.345201 | orchestrator |
2026-01-30 06:27:27.345205 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-30 06:27:27.345223 | orchestrator | Friday 30 January 2026 06:26:35 +0000 (0:00:01.492) 0:38:29.326 ********
2026-01-30 06:27:27.345227 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:27:27.345231 | orchestrator |
2026-01-30 06:27:27.345234 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-30 06:27:27.345238 | orchestrator | Friday 30 January 2026 06:26:39 +0000 (0:00:03.650) 0:38:32.976 ********
2026-01-30 06:27:27.345242 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3
2026-01-30 06:27:27.345246 | orchestrator |
2026-01-30 06:27:27.345249 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-30 06:27:27.345253 | orchestrator | Friday 30 January 2026 06:26:40 +0000 (0:00:01.453) 0:38:34.430 ********
2026-01-30 06:27:27.345257 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:27:27.345260 | orchestrator |
2026-01-30 06:27:27.345264 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-30 06:27:27.345268 | orchestrator | Friday 30 January 2026 06:26:42 +0000 (0:00:01.962) 0:38:36.393 ********
2026-01-30 06:27:27.345272 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:27:27.345275 | orchestrator |
2026-01-30 06:27:27.345279 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-30 06:27:27.345283 | orchestrator | Friday 30 January 2026 06:26:44 +0000 (0:00:01.917) 0:38:38.311 ********
2026-01-30 06:27:27.345287 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:27:27.345290 | orchestrator |
2026-01-30 06:27:27.345294 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-30 06:27:27.345298 | orchestrator | Friday 30 January 2026 06:26:46 +0000 (0:00:02.228) 0:38:40.539 ********
2026-01-30 06:27:27.345302 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345305 | orchestrator |
2026-01-30 06:27:27.345309 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-30 06:27:27.345313 | orchestrator | Friday 30 January 2026 06:26:48 +0000 (0:00:01.148) 0:38:41.688 ********
2026-01-30 06:27:27.345316 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345320 | orchestrator |
2026-01-30 06:27:27.345324 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-30 06:27:27.345328 | orchestrator | Friday 30 January 2026 06:26:49 +0000 (0:00:01.131) 0:38:42.819 ********
2026-01-30 06:27:27.345331 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-30 06:27:27.345335 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-01-30 06:27:27.345339 | orchestrator |
2026-01-30 06:27:27.345343 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-30 06:27:27.345346 | orchestrator | Friday 30 January 2026 06:26:51 +0000 (0:00:01.869) 0:38:44.689 ********
2026-01-30 06:27:27.345350 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-30 06:27:27.345354 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-01-30 06:27:27.345358 | orchestrator |
2026-01-30 06:27:27.345362 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-30 06:27:27.345365 | orchestrator | Friday 30 January 2026 06:26:53 +0000 (0:00:02.915) 0:38:47.605 ********
2026-01-30 06:27:27.345369 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-30 06:27:27.345384 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-30 06:27:27.345388 | orchestrator |
2026-01-30 06:27:27.345391 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-30 06:27:27.345395 | orchestrator | Friday 30 January 2026 06:26:58 +0000 (0:00:04.747) 0:38:52.353 ********
2026-01-30 06:27:27.345399 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345403 | orchestrator |
2026-01-30 06:27:27.345406 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-30 06:27:27.345410 | orchestrator | Friday 30 January 2026 06:26:59 +0000 (0:00:01.245) 0:38:53.598 ********
2026-01-30 06:27:27.345414 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345418 | orchestrator |
2026-01-30 06:27:27.345424 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-30 06:27:27.345431 | orchestrator | Friday 30 January 2026 06:27:01 +0000 (0:00:01.241) 0:38:54.839 ********
2026-01-30 06:27:27.345435 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345439 | orchestrator |
2026-01-30 06:27:27.345443 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-01-30 06:27:27.345446 | orchestrator | Friday 30 January 2026 06:27:02 +0000 (0:00:01.705) 0:38:56.544 ********
2026-01-30 06:27:27.345450 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345454 | orchestrator |
2026-01-30 06:27:27.345458 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-01-30 06:27:27.345461 | orchestrator | Friday 30 January 2026 06:27:04 +0000 (0:00:01.114) 0:38:57.659 ********
2026-01-30 06:27:27.345465 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:27:27.345469 | orchestrator |
2026-01-30 06:27:27.345473 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-01-30 06:27:27.345476 | orchestrator | Friday 30 January 2026 06:27:05 +0000 (0:00:01.117) 0:38:58.777 ********
2026-01-30 06:27:27.345480 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-01-30 06:27:27.345485 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-01-30 06:27:27.345490 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:27:27.345494 | orchestrator |
2026-01-30 06:27:27.345499 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-01-30 06:27:27.345503 | orchestrator |
2026-01-30 06:27:27.345507 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:27:27.345511 | orchestrator | Friday 30 January 2026 06:27:13 +0000 (0:00:08.274) 0:39:07.052 ********
2026-01-30 06:27:27.345516 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-01-30 06:27:27.345520 | orchestrator |
2026-01-30 06:27:27.345524 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-30 06:27:27.345528 | orchestrator | Friday 30 January 2026 06:27:14 +0000 (0:00:01.137) 0:39:08.190 ********
2026-01-30 06:27:27.345532 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345536 | orchestrator |
2026-01-30 06:27:27.345541 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-30 06:27:27.345545 | orchestrator | Friday 30 January 2026 06:27:16 +0000 (0:00:01.479) 0:39:09.669 ********
2026-01-30 06:27:27.345549 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345553 | orchestrator |
2026-01-30 06:27:27.345557 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 06:27:27.345562 | orchestrator | Friday 30 January 2026 06:27:17 +0000 (0:00:01.168) 0:39:10.838 ********
2026-01-30 06:27:27.345566 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345570 | orchestrator |
2026-01-30 06:27:27.345574 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 06:27:27.345579 | orchestrator | Friday 30 January 2026 06:27:18 +0000 (0:00:01.478) 0:39:12.316 ********
2026-01-30 06:27:27.345583 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345587 | orchestrator |
2026-01-30 06:27:27.345591 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 06:27:27.345596 | orchestrator | Friday 30 January 2026 06:27:19 +0000 (0:00:01.120) 0:39:13.437 ********
2026-01-30 06:27:27.345600 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345604 | orchestrator |
2026-01-30 06:27:27.345609 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 06:27:27.345613 | orchestrator | Friday 30 January 2026 06:27:20 +0000 (0:00:01.129) 0:39:14.567 ********
2026-01-30 06:27:27.345617 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345622 | orchestrator |
2026-01-30 06:27:27.345642 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 06:27:27.345647 | orchestrator | Friday 30 January 2026 06:27:22 +0000 (0:00:01.134) 0:39:15.701 ********
2026-01-30 06:27:27.345655 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:27.345659 | orchestrator |
2026-01-30 06:27:27.345663 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 06:27:27.345668 | orchestrator | Friday 30 January 2026 06:27:23 +0000 (0:00:01.183) 0:39:16.884 ********
2026-01-30 06:27:27.345672 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345676 | orchestrator |
2026-01-30 06:27:27.345680 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 06:27:27.345685 | orchestrator | Friday 30 January 2026 06:27:24 +0000 (0:00:01.151) 0:39:18.035 ********
2026-01-30 06:27:27.345689 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:27:27.345693 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:27:27.345698 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:27:27.345702 | orchestrator |
2026-01-30 06:27:27.345706 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 06:27:27.345711 | orchestrator | Friday 30 January 2026 06:27:26 +0000 (0:00:01.673) 0:39:19.709 ********
2026-01-30 06:27:27.345715 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:27.345719 | orchestrator |
2026-01-30 06:27:27.345724 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 06:27:27.345731 | orchestrator | Friday 30 January 2026 06:27:27 +0000 (0:00:01.232) 0:39:20.942 ********
2026-01-30 06:27:52.193113 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:27:52.193220 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:27:52.193232 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:27:52.193242 | orchestrator |
2026-01-30 06:27:52.193251 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 06:27:52.193274 | orchestrator | Friday 30 January 2026 06:27:30 +0000 (0:00:03.055) 0:39:23.997 ********
2026-01-30 06:27:52.193284 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-30 06:27:52.193292 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-30 06:27:52.193300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-30 06:27:52.193309 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193316 | orchestrator |
2026-01-30 06:27:52.193323 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-30 06:27:52.193331 | orchestrator | Friday 30 January 2026 06:27:31 +0000 (0:00:01.404) 0:39:25.402 ********
2026-01-30 06:27:52.193339 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193357 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193365 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193372 | orchestrator |
2026-01-30 06:27:52.193379 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-30 06:27:52.193386 | orchestrator | Friday 30 January 2026 06:27:33 +0000 (0:00:01.593) 0:39:26.996 ********
2026-01-30 06:27:52.193396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193435 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193443 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193450 | orchestrator |
2026-01-30 06:27:52.193458 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-30 06:27:52.193464 | orchestrator | Friday 30 January 2026 06:27:34 +0000 (0:00:01.147) 0:39:28.143 ********
2026-01-30 06:27:52.193473 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:27:27.884364', 'end': '2026-01-30 06:27:27.942340', 'delta': '0:00:00.057976', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193507 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:27:28.570814', 'end': '2026-01-30 06:27:28.640699', 'delta': '0:00:00.069885', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193518 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:27:29.177528', 'end': '2026-01-30 06:27:29.234676', 'delta': '0:00:00.057148', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:27:52.193526 | orchestrator |
2026-01-30 06:27:52.193534 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-30 06:27:52.193542 | orchestrator | Friday 30 January 2026 06:27:35 +0000 (0:00:01.228) 0:39:29.372 ********
2026-01-30 06:27:52.193549 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:52.193566 | orchestrator |
2026-01-30 06:27:52.193574 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 06:27:52.193582 | orchestrator | Friday 30 January 2026 06:27:37 +0000 (0:00:01.244) 0:39:30.616 ********
2026-01-30 06:27:52.193589 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193596 | orchestrator |
2026-01-30 06:27:52.193722 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 06:27:52.193732 | orchestrator | Friday 30 January 2026 06:27:38 +0000 (0:00:01.260) 0:39:31.877 ********
2026-01-30 06:27:52.193741 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:52.193750 | orchestrator |
2026-01-30 06:27:52.193759 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 06:27:52.193768 | orchestrator | Friday 30 January 2026 06:27:39 +0000 (0:00:01.100) 0:39:32.977 ********
2026-01-30 06:27:52.193778 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:27:52.193786 | orchestrator |
2026-01-30 06:27:52.193795 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:27:52.193803 | orchestrator | Friday 30 January 2026 06:27:41 +0000 (0:00:02.492) 0:39:35.470 ********
2026-01-30 06:27:52.193811 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:52.193820 | orchestrator |
2026-01-30 06:27:52.193828 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 06:27:52.193836 | orchestrator | Friday 30 January 2026 06:27:43 +0000 (0:00:01.164) 0:39:36.634 ********
2026-01-30 06:27:52.193845 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193852 | orchestrator |
2026-01-30 06:27:52.193861 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 06:27:52.193870 | orchestrator | Friday 30 January 2026 06:27:44 +0000 (0:00:01.119) 0:39:37.753 ********
2026-01-30 06:27:52.193879 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193888 | orchestrator |
2026-01-30 06:27:52.193895 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:27:52.193903 | orchestrator | Friday 30 January 2026 06:27:45 +0000 (0:00:01.264) 0:39:39.018 ********
2026-01-30 06:27:52.193913 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193921 | orchestrator |
2026-01-30 06:27:52.193929 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 06:27:52.193937 | orchestrator | Friday 30 January 2026 06:27:46 +0000 (0:00:01.123) 0:39:40.142 ********
2026-01-30 06:27:52.193945 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.193954 | orchestrator |
2026-01-30 06:27:52.193962 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 06:27:52.193969 | orchestrator | Friday 30 January 2026 06:27:47 +0000 (0:00:01.103) 0:39:41.246 ********
2026-01-30 06:27:52.193977 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:52.193984 | orchestrator |
2026-01-30 06:27:52.193992 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 06:27:52.194001 | orchestrator | Friday 30 January 2026 06:27:48 +0000 (0:00:01.131) 0:39:42.377 ********
2026-01-30 06:27:52.194009 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.194067 | orchestrator |
2026-01-30 06:27:52.194078 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 06:27:52.194086 | orchestrator | Friday 30 January 2026 06:27:49 +0000 (0:00:01.102) 0:39:43.480 ********
2026-01-30 06:27:52.194094 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:52.194101 | orchestrator |
2026-01-30 06:27:52.194108 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 06:27:52.194115 | orchestrator | Friday 30 January 2026 06:27:51 +0000 (0:00:01.205) 0:39:44.685 ********
2026-01-30 06:27:52.194123 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:52.194130 | orchestrator |
2026-01-30 06:27:52.194150 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 06:27:53.576099 | orchestrator | Friday 30 January 2026 06:27:52 +0000 (0:00:01.105) 0:39:45.791 ********
2026-01-30 06:27:53.576278 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:27:53.576308 | orchestrator |
2026-01-30 06:27:53.576329 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 06:27:53.576345 | orchestrator | Friday 30 January 2026 06:27:53 +0000 (0:00:01.170) 0:39:46.962 ********
2026-01-30 06:27:53.576421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:53.576450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}})
2026-01-30 06:27:53.576473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:27:53.576493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}})
2026-01-30 06:27:53.576514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:53.576532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:53.576578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 06:27:53.576767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:53.576797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-01-30 06:27:53.576816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:53.576836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}})
2026-01-30 06:27:53.576854 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}})
2026-01-30 06:27:53.576874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:53.576925 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:27:54.909508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:54.909585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:27:54.909595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-01-30 06:27:54.909604 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:27:54.909612 | orchestrator |
2026-01-30 06:27:54.909638 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-01-30 06:27:54.909651 | orchestrator | Friday 30 January 2026 06:27:54 +0000 (0:00:01.327) 0:39:48.289 ********
2026-01-30 06:27:54.909658 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:27:54.909696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:27:54.909704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU',
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:27:54.909723 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:27:54.909732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:27:54.909739 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:27:54.909754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:27:54.909761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:27:54.909772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.353917 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354069 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354107 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354131 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354158 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354188 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354196 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354209 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:28:00.354242 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:00.354251 | orchestrator | 2026-01-30 06:28:00.354259 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:28:00.354268 | orchestrator | Friday 30 January 2026 06:27:56 +0000 (0:00:01.394) 0:39:49.683 ******** 2026-01-30 06:28:00.354275 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:00.354282 | orchestrator | 2026-01-30 06:28:00.354290 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:28:00.354297 | orchestrator | Friday 30 January 2026 06:27:57 +0000 (0:00:01.618) 0:39:51.302 ******** 2026-01-30 06:28:00.354304 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:00.354311 | orchestrator | 2026-01-30 06:28:00.354318 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:28:00.354325 | orchestrator | Friday 30 January 2026 06:27:58 +0000 (0:00:01.148) 0:39:52.451 ******** 2026-01-30 06:28:00.354332 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:00.354339 | orchestrator | 2026-01-30 06:28:00.354345 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:28:00.354358 | orchestrator | Friday 30 January 2026 06:28:00 +0000 (0:00:01.504) 0:39:53.955 ******** 2026-01-30 06:28:41.917889 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918080 | orchestrator | 2026-01-30 06:28:41.918102 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:28:41.918113 | orchestrator | Friday 30 January 2026 06:28:01 +0000 (0:00:01.130) 0:39:55.086 ******** 2026-01-30 06:28:41.918121 | orchestrator | skipping: [testbed-node-4] 2026-01-30 
06:28:41.918130 | orchestrator | 2026-01-30 06:28:41.918138 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:28:41.918146 | orchestrator | Friday 30 January 2026 06:28:02 +0000 (0:00:01.258) 0:39:56.345 ******** 2026-01-30 06:28:41.918154 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918163 | orchestrator | 2026-01-30 06:28:41.918177 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:28:41.918210 | orchestrator | Friday 30 January 2026 06:28:03 +0000 (0:00:01.185) 0:39:57.531 ******** 2026-01-30 06:28:41.918219 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-30 06:28:41.918227 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-30 06:28:41.918235 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-30 06:28:41.918243 | orchestrator | 2026-01-30 06:28:41.918251 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:28:41.918259 | orchestrator | Friday 30 January 2026 06:28:05 +0000 (0:00:01.674) 0:39:59.205 ******** 2026-01-30 06:28:41.918267 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-30 06:28:41.918275 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-30 06:28:41.918283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-30 06:28:41.918290 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918298 | orchestrator | 2026-01-30 06:28:41.918306 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:28:41.918314 | orchestrator | Friday 30 January 2026 06:28:06 +0000 (0:00:01.147) 0:40:00.353 ******** 2026-01-30 06:28:41.918322 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-01-30 06:28:41.918330 | 
orchestrator | 2026-01-30 06:28:41.918340 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:28:41.918349 | orchestrator | Friday 30 January 2026 06:28:07 +0000 (0:00:01.177) 0:40:01.530 ******** 2026-01-30 06:28:41.918357 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918365 | orchestrator | 2026-01-30 06:28:41.918373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:28:41.918381 | orchestrator | Friday 30 January 2026 06:28:09 +0000 (0:00:01.148) 0:40:02.679 ******** 2026-01-30 06:28:41.918389 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918396 | orchestrator | 2026-01-30 06:28:41.918405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:28:41.918419 | orchestrator | Friday 30 January 2026 06:28:10 +0000 (0:00:01.140) 0:40:03.819 ******** 2026-01-30 06:28:41.918432 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918445 | orchestrator | 2026-01-30 06:28:41.918458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:28:41.918472 | orchestrator | Friday 30 January 2026 06:28:11 +0000 (0:00:01.139) 0:40:04.959 ******** 2026-01-30 06:28:41.918484 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.918498 | orchestrator | 2026-01-30 06:28:41.918512 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:28:41.918527 | orchestrator | Friday 30 January 2026 06:28:12 +0000 (0:00:01.264) 0:40:06.223 ******** 2026-01-30 06:28:41.918541 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-30 06:28:41.918553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-30 06:28:41.918562 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-01-30 06:28:41.918572 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918582 | orchestrator | 2026-01-30 06:28:41.918624 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:28:41.918640 | orchestrator | Friday 30 January 2026 06:28:14 +0000 (0:00:01.402) 0:40:07.626 ******** 2026-01-30 06:28:41.918653 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-30 06:28:41.918666 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-30 06:28:41.918680 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-30 06:28:41.918689 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918697 | orchestrator | 2026-01-30 06:28:41.918705 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:28:41.918713 | orchestrator | Friday 30 January 2026 06:28:15 +0000 (0:00:01.489) 0:40:09.115 ******** 2026-01-30 06:28:41.918729 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-30 06:28:41.918738 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-30 06:28:41.918752 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-30 06:28:41.918765 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.918778 | orchestrator | 2026-01-30 06:28:41.918791 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:28:41.918803 | orchestrator | Friday 30 January 2026 06:28:16 +0000 (0:00:01.368) 0:40:10.484 ******** 2026-01-30 06:28:41.918817 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.918830 | orchestrator | 2026-01-30 06:28:41.918844 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:28:41.918857 | orchestrator | Friday 30 January 2026 06:28:18 +0000 
(0:00:01.223) 0:40:11.708 ******** 2026-01-30 06:28:41.918870 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-30 06:28:41.918884 | orchestrator | 2026-01-30 06:28:41.918897 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:28:41.918911 | orchestrator | Friday 30 January 2026 06:28:19 +0000 (0:00:01.383) 0:40:13.092 ******** 2026-01-30 06:28:41.918945 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:28:41.918959 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:28:41.918973 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:28:41.918981 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:28:41.918989 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-01-30 06:28:41.918997 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:28:41.919005 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:28:41.919013 | orchestrator | 2026-01-30 06:28:41.919025 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:28:41.919038 | orchestrator | Friday 30 January 2026 06:28:21 +0000 (0:00:01.875) 0:40:14.967 ******** 2026-01-30 06:28:41.919052 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:28:41.919064 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:28:41.919076 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:28:41.919090 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-01-30 06:28:41.919103 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-01-30 06:28:41.919117 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:28:41.919130 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:28:41.919143 | orchestrator | 2026-01-30 06:28:41.919157 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-01-30 06:28:41.919165 | orchestrator | Friday 30 January 2026 06:28:23 +0000 (0:00:02.273) 0:40:17.241 ******** 2026-01-30 06:28:41.919173 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.919181 | orchestrator | 2026-01-30 06:28:41.919189 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-01-30 06:28:41.919197 | orchestrator | Friday 30 January 2026 06:28:24 +0000 (0:00:01.130) 0:40:18.372 ******** 2026-01-30 06:28:41.919204 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.919212 | orchestrator | 2026-01-30 06:28:41.919220 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-01-30 06:28:41.919228 | orchestrator | Friday 30 January 2026 06:28:25 +0000 (0:00:00.797) 0:40:19.170 ******** 2026-01-30 06:28:41.919236 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.919251 | orchestrator | 2026-01-30 06:28:41.919259 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-01-30 06:28:41.919266 | orchestrator | Friday 30 January 2026 06:28:26 +0000 (0:00:00.877) 0:40:20.047 ******** 2026-01-30 06:28:41.919274 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-01-30 06:28:41.919282 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-01-30 06:28:41.919290 | orchestrator | 2026-01-30 06:28:41.919298 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-01-30 06:28:41.919306 | orchestrator | Friday 30 January 2026 06:28:30 +0000 (0:00:03.930) 0:40:23.977 ******** 2026-01-30 06:28:41.919314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-01-30 06:28:41.919322 | orchestrator | 2026-01-30 06:28:41.919330 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 06:28:41.919338 | orchestrator | Friday 30 January 2026 06:28:31 +0000 (0:00:01.209) 0:40:25.186 ******** 2026-01-30 06:28:41.919346 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-01-30 06:28:41.919354 | orchestrator | 2026-01-30 06:28:41.919367 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 06:28:41.919375 | orchestrator | Friday 30 January 2026 06:28:32 +0000 (0:00:01.117) 0:40:26.303 ******** 2026-01-30 06:28:41.919383 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.919391 | orchestrator | 2026-01-30 06:28:41.919399 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 06:28:41.919406 | orchestrator | Friday 30 January 2026 06:28:33 +0000 (0:00:01.139) 0:40:27.443 ******** 2026-01-30 06:28:41.919414 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.919422 | orchestrator | 2026-01-30 06:28:41.919430 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 06:28:41.919438 | orchestrator | Friday 30 January 2026 06:28:35 +0000 (0:00:01.575) 0:40:29.019 ******** 2026-01-30 06:28:41.919446 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.919454 | orchestrator | 2026-01-30 06:28:41.919461 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 06:28:41.919469 | orchestrator | 
Friday 30 January 2026 06:28:36 +0000 (0:00:01.539) 0:40:30.559 ******** 2026-01-30 06:28:41.919477 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:28:41.919485 | orchestrator | 2026-01-30 06:28:41.919493 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:28:41.919501 | orchestrator | Friday 30 January 2026 06:28:38 +0000 (0:00:01.583) 0:40:32.142 ******** 2026-01-30 06:28:41.919508 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.919516 | orchestrator | 2026-01-30 06:28:41.919524 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:28:41.919532 | orchestrator | Friday 30 January 2026 06:28:39 +0000 (0:00:01.113) 0:40:33.256 ******** 2026-01-30 06:28:41.919540 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.919548 | orchestrator | 2026-01-30 06:28:41.919555 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:28:41.919563 | orchestrator | Friday 30 January 2026 06:28:40 +0000 (0:00:01.140) 0:40:34.397 ******** 2026-01-30 06:28:41.919571 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:28:41.919579 | orchestrator | 2026-01-30 06:28:41.919594 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:29:22.193387 | orchestrator | Friday 30 January 2026 06:28:41 +0000 (0:00:01.113) 0:40:35.510 ******** 2026-01-30 06:29:22.193511 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.193527 | orchestrator | 2026-01-30 06:29:22.193540 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:29:22.193551 | orchestrator | Friday 30 January 2026 06:28:43 +0000 (0:00:01.558) 0:40:37.069 ******** 2026-01-30 06:29:22.193562 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.193573 | orchestrator | 2026-01-30 06:29:22.193585 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:29:22.193691 | orchestrator | Friday 30 January 2026 06:28:44 +0000 (0:00:01.541) 0:40:38.610 ******** 2026-01-30 06:29:22.193706 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.193718 | orchestrator | 2026-01-30 06:29:22.193729 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:29:22.193740 | orchestrator | Friday 30 January 2026 06:28:45 +0000 (0:00:00.796) 0:40:39.407 ******** 2026-01-30 06:29:22.193750 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.193762 | orchestrator | 2026-01-30 06:29:22.193781 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:29:22.193800 | orchestrator | Friday 30 January 2026 06:28:46 +0000 (0:00:00.758) 0:40:40.166 ******** 2026-01-30 06:29:22.193821 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.193842 | orchestrator | 2026-01-30 06:29:22.193861 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:29:22.193880 | orchestrator | Friday 30 January 2026 06:28:47 +0000 (0:00:00.774) 0:40:40.940 ******** 2026-01-30 06:29:22.193898 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.193916 | orchestrator | 2026-01-30 06:29:22.193935 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:29:22.193953 | orchestrator | Friday 30 January 2026 06:28:48 +0000 (0:00:00.801) 0:40:41.741 ******** 2026-01-30 06:29:22.193972 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.193991 | orchestrator | 2026-01-30 06:29:22.194008 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:29:22.194093 | orchestrator | Friday 30 January 2026 06:28:48 +0000 (0:00:00.768) 0:40:42.510 ******** 2026-01-30 06:29:22.194104 | 
orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194115 | orchestrator | 2026-01-30 06:29:22.194126 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:29:22.194162 | orchestrator | Friday 30 January 2026 06:28:49 +0000 (0:00:00.772) 0:40:43.283 ******** 2026-01-30 06:29:22.194174 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194184 | orchestrator | 2026-01-30 06:29:22.194195 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:29:22.194206 | orchestrator | Friday 30 January 2026 06:28:50 +0000 (0:00:00.761) 0:40:44.045 ******** 2026-01-30 06:29:22.194217 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194227 | orchestrator | 2026-01-30 06:29:22.194238 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:29:22.194249 | orchestrator | Friday 30 January 2026 06:28:51 +0000 (0:00:00.762) 0:40:44.807 ******** 2026-01-30 06:29:22.194259 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.194270 | orchestrator | 2026-01-30 06:29:22.194281 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:29:22.194291 | orchestrator | Friday 30 January 2026 06:28:51 +0000 (0:00:00.766) 0:40:45.574 ******** 2026-01-30 06:29:22.194302 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.194312 | orchestrator | 2026-01-30 06:29:22.194323 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:29:22.194334 | orchestrator | Friday 30 January 2026 06:28:52 +0000 (0:00:00.789) 0:40:46.363 ******** 2026-01-30 06:29:22.194345 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194356 | orchestrator | 2026-01-30 06:29:22.194366 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 
06:29:22.194377 | orchestrator | Friday 30 January 2026 06:28:53 +0000 (0:00:00.796) 0:40:47.160 ******** 2026-01-30 06:29:22.194403 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194414 | orchestrator | 2026-01-30 06:29:22.194425 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:29:22.194436 | orchestrator | Friday 30 January 2026 06:28:54 +0000 (0:00:00.771) 0:40:47.931 ******** 2026-01-30 06:29:22.194446 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194457 | orchestrator | 2026-01-30 06:29:22.194467 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:29:22.194489 | orchestrator | Friday 30 January 2026 06:28:55 +0000 (0:00:00.919) 0:40:48.851 ******** 2026-01-30 06:29:22.194500 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194511 | orchestrator | 2026-01-30 06:29:22.194521 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:29:22.194532 | orchestrator | Friday 30 January 2026 06:28:56 +0000 (0:00:00.769) 0:40:49.620 ******** 2026-01-30 06:29:22.194543 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194553 | orchestrator | 2026-01-30 06:29:22.194564 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:29:22.194574 | orchestrator | Friday 30 January 2026 06:28:56 +0000 (0:00:00.771) 0:40:50.392 ******** 2026-01-30 06:29:22.194585 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194649 | orchestrator | 2026-01-30 06:29:22.194662 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:29:22.194672 | orchestrator | Friday 30 January 2026 06:28:57 +0000 (0:00:00.770) 0:40:51.162 ******** 2026-01-30 06:29:22.194683 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194693 | 
orchestrator | 2026-01-30 06:29:22.194704 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:29:22.194716 | orchestrator | Friday 30 January 2026 06:28:58 +0000 (0:00:00.771) 0:40:51.933 ******** 2026-01-30 06:29:22.194726 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194737 | orchestrator | 2026-01-30 06:29:22.194748 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:29:22.194758 | orchestrator | Friday 30 January 2026 06:28:59 +0000 (0:00:00.760) 0:40:52.694 ******** 2026-01-30 06:29:22.194791 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194803 | orchestrator | 2026-01-30 06:29:22.194814 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:29:22.194825 | orchestrator | Friday 30 January 2026 06:28:59 +0000 (0:00:00.743) 0:40:53.437 ******** 2026-01-30 06:29:22.194836 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194846 | orchestrator | 2026-01-30 06:29:22.194857 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:29:22.194867 | orchestrator | Friday 30 January 2026 06:29:00 +0000 (0:00:00.779) 0:40:54.217 ******** 2026-01-30 06:29:22.194878 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194889 | orchestrator | 2026-01-30 06:29:22.194899 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-01-30 06:29:22.194910 | orchestrator | Friday 30 January 2026 06:29:01 +0000 (0:00:00.755) 0:40:54.973 ******** 2026-01-30 06:29:22.194921 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.194931 | orchestrator | 2026-01-30 06:29:22.194942 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:29:22.194953 | orchestrator | Friday 30 
January 2026 06:29:02 +0000 (0:00:00.768) 0:40:55.741 ******** 2026-01-30 06:29:22.194963 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.194974 | orchestrator | 2026-01-30 06:29:22.194985 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:29:22.194995 | orchestrator | Friday 30 January 2026 06:29:03 +0000 (0:00:01.606) 0:40:57.348 ******** 2026-01-30 06:29:22.195006 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.195016 | orchestrator | 2026-01-30 06:29:22.195027 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:29:22.195038 | orchestrator | Friday 30 January 2026 06:29:05 +0000 (0:00:02.005) 0:40:59.354 ******** 2026-01-30 06:29:22.195048 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-01-30 06:29:22.195061 | orchestrator | 2026-01-30 06:29:22.195071 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:29:22.195082 | orchestrator | Friday 30 January 2026 06:29:07 +0000 (0:00:01.295) 0:41:00.650 ******** 2026-01-30 06:29:22.195092 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.195103 | orchestrator | 2026-01-30 06:29:22.195123 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:29:22.195142 | orchestrator | Friday 30 January 2026 06:29:08 +0000 (0:00:01.139) 0:41:01.790 ******** 2026-01-30 06:29:22.195161 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.195178 | orchestrator | 2026-01-30 06:29:22.195195 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-30 06:29:22.195213 | orchestrator | Friday 30 January 2026 06:29:09 +0000 (0:00:01.141) 0:41:02.931 ******** 2026-01-30 06:29:22.195231 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:29:22.195247 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:29:22.195285 | orchestrator | 2026-01-30 06:29:22.195306 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:29:22.195326 | orchestrator | Friday 30 January 2026 06:29:11 +0000 (0:00:01.941) 0:41:04.873 ******** 2026-01-30 06:29:22.195345 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.195363 | orchestrator | 2026-01-30 06:29:22.195380 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:29:22.195392 | orchestrator | Friday 30 January 2026 06:29:12 +0000 (0:00:01.553) 0:41:06.427 ******** 2026-01-30 06:29:22.195402 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.195413 | orchestrator | 2026-01-30 06:29:22.195424 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:29:22.195434 | orchestrator | Friday 30 January 2026 06:29:14 +0000 (0:00:01.197) 0:41:07.624 ******** 2026-01-30 06:29:22.195445 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.195456 | orchestrator | 2026-01-30 06:29:22.195473 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:29:22.195485 | orchestrator | Friday 30 January 2026 06:29:14 +0000 (0:00:00.824) 0:41:08.449 ******** 2026-01-30 06:29:22.195495 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.195506 | orchestrator | 2026-01-30 06:29:22.195517 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:29:22.195527 | orchestrator | Friday 30 January 2026 06:29:15 +0000 (0:00:00.762) 0:41:09.211 ******** 2026-01-30 06:29:22.195538 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-01-30 06:29:22.195549 | orchestrator | 2026-01-30 06:29:22.195560 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:29:22.195570 | orchestrator | Friday 30 January 2026 06:29:16 +0000 (0:00:01.132) 0:41:10.344 ******** 2026-01-30 06:29:22.195581 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:29:22.195591 | orchestrator | 2026-01-30 06:29:22.195622 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:29:22.195633 | orchestrator | Friday 30 January 2026 06:29:18 +0000 (0:00:01.864) 0:41:12.209 ******** 2026-01-30 06:29:22.195644 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:29:22.195655 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:29:22.195665 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:29:22.195675 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.195686 | orchestrator | 2026-01-30 06:29:22.195696 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:29:22.195707 | orchestrator | Friday 30 January 2026 06:29:19 +0000 (0:00:01.149) 0:41:13.359 ******** 2026-01-30 06:29:22.195717 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:29:22.195728 | orchestrator | 2026-01-30 06:29:22.195738 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:29:22.195749 | orchestrator | Friday 30 January 2026 06:29:20 +0000 (0:00:01.135) 0:41:14.494 ******** 2026-01-30 06:29:22.195770 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444620 | orchestrator | 2026-01-30 06:30:05.444720 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:30:05.444752 | 
orchestrator | Friday 30 January 2026 06:29:22 +0000 (0:00:01.294) 0:41:15.789 ******** 2026-01-30 06:30:05.444758 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444766 | orchestrator | 2026-01-30 06:30:05.444772 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:30:05.444778 | orchestrator | Friday 30 January 2026 06:29:23 +0000 (0:00:01.175) 0:41:16.966 ******** 2026-01-30 06:30:05.444784 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444790 | orchestrator | 2026-01-30 06:30:05.444796 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:30:05.444802 | orchestrator | Friday 30 January 2026 06:29:24 +0000 (0:00:01.166) 0:41:18.133 ******** 2026-01-30 06:30:05.444808 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444814 | orchestrator | 2026-01-30 06:30:05.444820 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:30:05.444825 | orchestrator | Friday 30 January 2026 06:29:25 +0000 (0:00:00.793) 0:41:18.926 ******** 2026-01-30 06:30:05.444831 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:30:05.444838 | orchestrator | 2026-01-30 06:30:05.444844 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:30:05.444850 | orchestrator | Friday 30 January 2026 06:29:27 +0000 (0:00:02.249) 0:41:21.176 ******** 2026-01-30 06:30:05.444856 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:30:05.444862 | orchestrator | 2026-01-30 06:30:05.444868 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:30:05.444873 | orchestrator | Friday 30 January 2026 06:29:28 +0000 (0:00:00.815) 0:41:21.991 ******** 2026-01-30 06:30:05.444879 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-01-30 06:30:05.444885 | orchestrator | 2026-01-30 06:30:05.444891 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:30:05.444897 | orchestrator | Friday 30 January 2026 06:29:29 +0000 (0:00:01.106) 0:41:23.098 ******** 2026-01-30 06:30:05.444902 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444908 | orchestrator | 2026-01-30 06:30:05.444914 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:30:05.444920 | orchestrator | Friday 30 January 2026 06:29:30 +0000 (0:00:01.165) 0:41:24.264 ******** 2026-01-30 06:30:05.444925 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444931 | orchestrator | 2026-01-30 06:30:05.444937 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:30:05.444943 | orchestrator | Friday 30 January 2026 06:29:31 +0000 (0:00:01.133) 0:41:25.397 ******** 2026-01-30 06:30:05.444948 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444954 | orchestrator | 2026-01-30 06:30:05.444960 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:30:05.444965 | orchestrator | Friday 30 January 2026 06:29:32 +0000 (0:00:01.140) 0:41:26.538 ******** 2026-01-30 06:30:05.444971 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.444977 | orchestrator | 2026-01-30 06:30:05.444983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:30:05.444988 | orchestrator | Friday 30 January 2026 06:29:34 +0000 (0:00:01.161) 0:41:27.700 ******** 2026-01-30 06:30:05.444994 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445000 | orchestrator | 2026-01-30 06:30:05.445005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:30:05.445011 | orchestrator | 
Friday 30 January 2026 06:29:35 +0000 (0:00:01.128) 0:41:28.828 ******** 2026-01-30 06:30:05.445017 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445023 | orchestrator | 2026-01-30 06:30:05.445028 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:30:05.445034 | orchestrator | Friday 30 January 2026 06:29:36 +0000 (0:00:01.209) 0:41:30.038 ******** 2026-01-30 06:30:05.445050 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445061 | orchestrator | 2026-01-30 06:30:05.445067 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:30:05.445073 | orchestrator | Friday 30 January 2026 06:29:37 +0000 (0:00:01.169) 0:41:31.207 ******** 2026-01-30 06:30:05.445078 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445084 | orchestrator | 2026-01-30 06:30:05.445090 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:30:05.445096 | orchestrator | Friday 30 January 2026 06:29:38 +0000 (0:00:01.185) 0:41:32.393 ******** 2026-01-30 06:30:05.445102 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:30:05.445107 | orchestrator | 2026-01-30 06:30:05.445113 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:30:05.445119 | orchestrator | Friday 30 January 2026 06:29:39 +0000 (0:00:00.792) 0:41:33.185 ******** 2026-01-30 06:30:05.445125 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-01-30 06:30:05.445131 | orchestrator | 2026-01-30 06:30:05.445137 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:30:05.445143 | orchestrator | Friday 30 January 2026 06:29:40 +0000 (0:00:01.145) 0:41:34.331 ******** 2026-01-30 06:30:05.445150 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-01-30 06:30:05.445158 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-30 06:30:05.445165 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-30 06:30:05.445171 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-30 06:30:05.445177 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-30 06:30:05.445184 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-30 06:30:05.445191 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-30 06:30:05.445198 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:30:05.445206 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:30:05.445226 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:30:05.445234 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:30:05.445241 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:30:05.445249 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:30:05.445256 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:30:05.445264 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-01-30 06:30:05.445271 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-01-30 06:30:05.445278 | orchestrator | 2026-01-30 06:30:05.445285 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:30:05.445292 | orchestrator | Friday 30 January 2026 06:29:47 +0000 (0:00:06.578) 0:41:40.909 ******** 2026-01-30 06:30:05.445299 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-01-30 06:30:05.445306 | orchestrator | 2026-01-30 06:30:05.445313 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-01-30 06:30:05.445320 | orchestrator | Friday 30 January 2026 06:29:48 +0000 (0:00:01.121) 0:41:42.031 ******** 2026-01-30 06:30:05.445328 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:30:05.445335 | orchestrator | 2026-01-30 06:30:05.445343 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-30 06:30:05.445350 | orchestrator | Friday 30 January 2026 06:29:49 +0000 (0:00:01.504) 0:41:43.536 ******** 2026-01-30 06:30:05.445357 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:30:05.445365 | orchestrator | 2026-01-30 06:30:05.445372 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:30:05.445384 | orchestrator | Friday 30 January 2026 06:29:51 +0000 (0:00:01.670) 0:41:45.206 ******** 2026-01-30 06:30:05.445391 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445398 | orchestrator | 2026-01-30 06:30:05.445405 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:30:05.445413 | orchestrator | Friday 30 January 2026 06:29:52 +0000 (0:00:00.773) 0:41:45.980 ******** 2026-01-30 06:30:05.445420 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445426 | orchestrator | 2026-01-30 06:30:05.445432 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:30:05.445438 | orchestrator | Friday 30 January 2026 06:29:53 +0000 (0:00:00.819) 0:41:46.799 ******** 2026-01-30 06:30:05.445445 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445451 | orchestrator | 2026-01-30 06:30:05.445457 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-01-30 06:30:05.445463 | orchestrator | Friday 30 January 2026 06:29:53 +0000 (0:00:00.794) 0:41:47.594 ******** 2026-01-30 06:30:05.445469 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445475 | orchestrator | 2026-01-30 06:30:05.445481 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:30:05.445488 | orchestrator | Friday 30 January 2026 06:29:54 +0000 (0:00:00.782) 0:41:48.377 ******** 2026-01-30 06:30:05.445494 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445500 | orchestrator | 2026-01-30 06:30:05.445506 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:30:05.445512 | orchestrator | Friday 30 January 2026 06:29:55 +0000 (0:00:00.822) 0:41:49.199 ******** 2026-01-30 06:30:05.445519 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445525 | orchestrator | 2026-01-30 06:30:05.445531 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:30:05.445541 | orchestrator | Friday 30 January 2026 06:29:56 +0000 (0:00:00.773) 0:41:49.973 ******** 2026-01-30 06:30:05.445547 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445554 | orchestrator | 2026-01-30 06:30:05.445560 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:30:05.445566 | orchestrator | Friday 30 January 2026 06:29:57 +0000 (0:00:00.768) 0:41:50.742 ******** 2026-01-30 06:30:05.445572 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445578 | orchestrator | 2026-01-30 06:30:05.445601 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:30:05.445608 | orchestrator | Friday 30 
January 2026 06:29:57 +0000 (0:00:00.784) 0:41:51.526 ******** 2026-01-30 06:30:05.445614 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445620 | orchestrator | 2026-01-30 06:30:05.445626 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:30:05.445632 | orchestrator | Friday 30 January 2026 06:29:58 +0000 (0:00:00.778) 0:41:52.305 ******** 2026-01-30 06:30:05.445638 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:05.445645 | orchestrator | 2026-01-30 06:30:05.445651 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:30:05.445657 | orchestrator | Friday 30 January 2026 06:29:59 +0000 (0:00:00.771) 0:41:53.076 ******** 2026-01-30 06:30:05.445663 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:30:05.445669 | orchestrator | 2026-01-30 06:30:05.445675 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:30:05.445681 | orchestrator | Friday 30 January 2026 06:30:00 +0000 (0:00:00.853) 0:41:53.930 ******** 2026-01-30 06:30:05.445687 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-01-30 06:30:05.445694 | orchestrator | 2026-01-30 06:30:05.445700 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:30:05.445706 | orchestrator | Friday 30 January 2026 06:30:04 +0000 (0:00:04.280) 0:41:58.211 ******** 2026-01-30 06:30:05.445716 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:30:47.350481 | orchestrator | 2026-01-30 06:30:47.350635 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:30:47.350651 | orchestrator | Friday 30 January 2026 06:30:05 +0000 (0:00:00.832) 0:41:59.044 ******** 2026-01-30 06:30:47.350661 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-01-30 06:30:47.350671 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-01-30 06:30:47.350680 | orchestrator | 2026-01-30 06:30:47.350687 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:30:47.350693 | orchestrator | Friday 30 January 2026 06:30:13 +0000 (0:00:07.872) 0:42:06.917 ******** 2026-01-30 06:30:47.350700 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:47.350708 | orchestrator | 2026-01-30 06:30:47.350715 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:30:47.350722 | orchestrator | Friday 30 January 2026 06:30:14 +0000 (0:00:00.835) 0:42:07.752 ******** 2026-01-30 06:30:47.350729 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:47.350736 | orchestrator | 2026-01-30 06:30:47.350743 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:30:47.350806 | orchestrator | Friday 30 January 2026 06:30:14 +0000 (0:00:00.782) 0:42:08.535 ******** 2026-01-30 06:30:47.350814 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:30:47.350820 | orchestrator | 2026-01-30 06:30:47.350826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] ****
2026-01-30 06:30:47.350832 | orchestrator | Friday 30 January 2026 06:30:15 +0000 (0:00:00.830) 0:42:09.366 ********
2026-01-30 06:30:47.350839 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.350845 | orchestrator |
2026-01-30 06:30:47.350851 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:30:47.350857 | orchestrator | Friday 30 January 2026 06:30:16 +0000 (0:00:00.775) 0:42:10.142 ********
2026-01-30 06:30:47.350889 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.350897 | orchestrator |
2026-01-30 06:30:47.350904 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:30:47.350911 | orchestrator | Friday 30 January 2026 06:30:17 +0000 (0:00:00.799) 0:42:10.941 ********
2026-01-30 06:30:47.350918 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:30:47.350926 | orchestrator |
2026-01-30 06:30:47.350933 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:30:47.350940 | orchestrator | Friday 30 January 2026 06:30:18 +0000 (0:00:00.902) 0:42:11.844 ********
2026-01-30 06:30:47.350947 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-30 06:30:47.350954 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:30:47.350960 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-30 06:30:47.350987 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.350994 | orchestrator |
2026-01-30 06:30:47.351000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:30:47.351023 | orchestrator | Friday 30 January 2026 06:30:19 +0000 (0:00:01.104) 0:42:12.948 ********
2026-01-30 06:30:47.351030 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-30 06:30:47.351036 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:30:47.351062 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-30 06:30:47.351069 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351075 | orchestrator |
2026-01-30 06:30:47.351082 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:30:47.351088 | orchestrator | Friday 30 January 2026 06:30:20 +0000 (0:00:01.106) 0:42:14.055 ********
2026-01-30 06:30:47.351094 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-30 06:30:47.351100 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:30:47.351107 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-30 06:30:47.351113 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351120 | orchestrator |
2026-01-30 06:30:47.351127 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:30:47.351133 | orchestrator | Friday 30 January 2026 06:30:21 +0000 (0:00:01.044) 0:42:15.100 ********
2026-01-30 06:30:47.351140 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:30:47.351146 | orchestrator |
2026-01-30 06:30:47.351153 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:30:47.351159 | orchestrator | Friday 30 January 2026 06:30:22 +0000 (0:00:00.823) 0:42:15.924 ********
2026-01-30 06:30:47.351165 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-30 06:30:47.351172 | orchestrator |
2026-01-30 06:30:47.351178 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 06:30:47.351185 | orchestrator | Friday 30 January 2026 06:30:23 +0000 (0:00:01.041) 0:42:16.966 ********
2026-01-30 06:30:47.351191 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:30:47.351197 | orchestrator |
2026-01-30 06:30:47.351204 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-30 06:30:47.351211 | orchestrator | Friday 30 January 2026 06:30:24 +0000 (0:00:01.539) 0:42:18.506 ********
2026-01-30 06:30:47.351217 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:30:47.351224 | orchestrator |
2026-01-30 06:30:47.351245 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-30 06:30:47.351252 | orchestrator | Friday 30 January 2026 06:30:25 +0000 (0:00:00.897) 0:42:19.403 ********
2026-01-30 06:30:47.351259 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:30:47.351266 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:30:47.351273 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:30:47.351279 | orchestrator |
2026-01-30 06:30:47.351286 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-30 06:30:47.351293 | orchestrator | Friday 30 January 2026 06:30:27 +0000 (0:00:01.355) 0:42:20.759 ********
2026-01-30 06:30:47.351299 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4
2026-01-30 06:30:47.351306 | orchestrator |
2026-01-30 06:30:47.351312 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-30 06:30:47.351318 | orchestrator | Friday 30 January 2026 06:30:28 +0000 (0:00:01.130) 0:42:21.890 ********
2026-01-30 06:30:47.351325 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351332 | orchestrator |
2026-01-30 06:30:47.351338 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-30 06:30:47.351345 | orchestrator | Friday 30 January 2026 06:30:29 +0000 (0:00:01.124) 0:42:23.014 ********
2026-01-30 06:30:47.351351 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351358 | orchestrator |
2026-01-30 06:30:47.351364 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-30 06:30:47.351370 | orchestrator | Friday 30 January 2026 06:30:30 +0000 (0:00:01.152) 0:42:24.167 ********
2026-01-30 06:30:47.351376 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:30:47.351382 | orchestrator |
2026-01-30 06:30:47.351388 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-30 06:30:47.351395 | orchestrator | Friday 30 January 2026 06:30:32 +0000 (0:00:01.453) 0:42:25.620 ********
2026-01-30 06:30:47.351416 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:30:47.351445 | orchestrator |
2026-01-30 06:30:47.351452 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-30 06:30:47.351459 | orchestrator | Friday 30 January 2026 06:30:33 +0000 (0:00:01.146) 0:42:26.766 ********
2026-01-30 06:30:47.351465 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-30 06:30:47.351471 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-30 06:30:47.351477 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-30 06:30:47.351484 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-30 06:30:47.351490 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-30 06:30:47.351514 | orchestrator |
2026-01-30 06:30:47.351521 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-30 06:30:47.351527 | orchestrator | Friday 30 January 2026 06:30:35 +0000 (0:00:02.592) 0:42:29.359 ********
2026-01-30 06:30:47.351533 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351540 | orchestrator |
2026-01-30 06:30:47.351546 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-30 06:30:47.351552 | orchestrator | Friday 30 January 2026 06:30:36 +0000 (0:00:00.787) 0:42:30.146 ********
2026-01-30 06:30:47.351558 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4
2026-01-30 06:30:47.351564 | orchestrator |
2026-01-30 06:30:47.351585 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-30 06:30:47.351596 | orchestrator | Friday 30 January 2026 06:30:37 +0000 (0:00:01.102) 0:42:31.249 ********
2026-01-30 06:30:47.351603 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-30 06:30:47.351609 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-30 06:30:47.351615 | orchestrator |
2026-01-30 06:30:47.351621 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-30 06:30:47.351628 | orchestrator | Friday 30 January 2026 06:30:39 +0000 (0:00:01.866) 0:42:33.116 ********
2026-01-30 06:30:47.351634 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:30:47.351641 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-30 06:30:47.351647 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-30 06:30:47.351653 | orchestrator |
2026-01-30 06:30:47.351659 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-30 06:30:47.351665 | orchestrator | Friday 30 January 2026 06:30:43 +0000 (0:00:03.722) 0:42:36.838 ********
2026-01-30 06:30:47.351672 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-01-30 06:30:47.351678 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-30 06:30:47.351685 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:30:47.351691 | orchestrator |
2026-01-30 06:30:47.351697 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-30 06:30:47.351704 | orchestrator | Friday 30 January 2026 06:30:44 +0000 (0:00:01.684) 0:42:38.523 ********
2026-01-30 06:30:47.351710 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351716 | orchestrator |
2026-01-30 06:30:47.351723 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-30 06:30:47.351729 | orchestrator | Friday 30 January 2026 06:30:45 +0000 (0:00:00.874) 0:42:39.397 ********
2026-01-30 06:30:47.351735 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351741 | orchestrator |
2026-01-30 06:30:47.351766 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-30 06:30:47.351772 | orchestrator | Friday 30 January 2026 06:30:46 +0000 (0:00:00.775) 0:42:40.173 ********
2026-01-30 06:30:47.351779 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:30:47.351785 | orchestrator |
2026-01-30 06:30:47.351802 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-30 06:31:55.456044 | orchestrator | Friday 30 January 2026 06:30:47 +0000 (0:00:00.774) 0:42:40.947 ********
2026-01-30 06:31:55.456182 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4
2026-01-30 06:31:55.456200 | orchestrator |
2026-01-30 06:31:55.456213 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-30 06:31:55.456225 | orchestrator | Friday 30 January 2026 06:30:48 +0000 (0:00:01.112) 0:42:42.060 ********
2026-01-30 06:31:55.456236 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:31:55.456248 | orchestrator |
2026-01-30 06:31:55.456260 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-30 06:31:55.456271 | orchestrator | Friday 30 January 2026 06:30:49 +0000 (0:00:01.466) 0:42:43.527 ********
2026-01-30 06:31:55.456282 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:31:55.456292 | orchestrator |
2026-01-30 06:31:55.456303 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-30 06:31:55.456314 | orchestrator | Friday 30 January 2026 06:30:53 +0000 (0:00:03.394) 0:42:46.921 ********
2026-01-30 06:31:55.456325 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4
2026-01-30 06:31:55.456336 | orchestrator |
2026-01-30 06:31:55.456346 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-30 06:31:55.456357 | orchestrator | Friday 30 January 2026 06:30:54 +0000 (0:00:01.104) 0:42:48.025 ********
2026-01-30 06:31:55.456368 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:31:55.456379 | orchestrator |
2026-01-30 06:31:55.456389 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-30 06:31:55.456400 | orchestrator | Friday 30 January 2026 06:30:56 +0000 (0:00:01.990) 0:42:50.016 ********
2026-01-30 06:31:55.456411 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:31:55.456421 | orchestrator |
2026-01-30 06:31:55.456432 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-30 06:31:55.456443 | orchestrator | Friday 30 January 2026 06:30:58 +0000 (0:00:01.963) 0:42:51.980 ********
2026-01-30 06:31:55.456454 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:31:55.456466 | orchestrator |
2026-01-30 06:31:55.456476 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-30 06:31:55.456487 | orchestrator | Friday 30 January 2026 06:31:00 +0000 (0:00:02.418) 0:42:54.399 ********
2026-01-30 06:31:55.456498 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:31:55.456510 | orchestrator |
2026-01-30 06:31:55.456520 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-30 06:31:55.456531 | orchestrator | Friday 30 January 2026 06:31:01 +0000 (0:00:01.112) 0:42:55.511 ********
2026-01-30 06:31:55.456542 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:31:55.456555 | orchestrator |
2026-01-30 06:31:55.456606 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-30 06:31:55.456619 | orchestrator | Friday 30 January 2026 06:31:03 +0000 (0:00:01.180) 0:42:56.691 ********
2026-01-30 06:31:55.456631 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-01-30 06:31:55.456644 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-01-30 06:31:55.456657 | orchestrator |
2026-01-30 06:31:55.456668 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-30 06:31:55.456679 | orchestrator | Friday 30 January 2026 06:31:04 +0000 (0:00:01.836) 0:42:58.527 ********
2026-01-30 06:31:55.456689 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-01-30 06:31:55.456700 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-01-30 06:31:55.456711 | orchestrator |
2026-01-30 06:31:55.456722 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-30 06:31:55.456733 | orchestrator | Friday 30 January 2026 06:31:07 +0000 (0:00:03.085) 0:43:01.613 ********
2026-01-30 06:31:55.456752 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-01-30 06:31:55.456770 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-01-30 06:31:55.456789 | orchestrator |
2026-01-30 06:31:55.456855 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-30 06:31:55.456876 | orchestrator | Friday 30 January 2026 06:31:12 +0000 (0:00:04.688) 0:43:06.301 ********
2026-01-30 06:31:55.456895 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:31:55.456915 | orchestrator |
2026-01-30 06:31:55.456934 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-30 06:31:55.456952 | orchestrator | Friday 30 January 2026 06:31:13 +0000 (0:00:00.876) 0:43:07.177 ********
2026-01-30 06:31:55.456970 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:31:55.456991 | orchestrator |
2026-01-30 06:31:55.457009 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-30 06:31:55.457027 | orchestrator | Friday 30 January 2026 06:31:14 +0000 (0:00:00.870) 0:43:08.048 ********
2026-01-30 06:31:55.457047 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:31:55.457065 | orchestrator |
2026-01-30 06:31:55.457082 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-01-30 06:31:55.457099 | orchestrator | Friday 30 January 2026 06:31:15 +0000 (0:00:00.890) 0:43:08.938 ********
2026-01-30 06:31:55.457118 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:31:55.457137 | orchestrator |
2026-01-30 06:31:55.457156 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-01-30 06:31:55.457172 | orchestrator | Friday 30 January 2026 06:31:16 +0000 (0:00:00.850) 0:43:09.789 ********
2026-01-30 06:31:55.457190 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:31:55.457208 | orchestrator |
2026-01-30 06:31:55.457227 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-01-30 06:31:55.457244 | orchestrator | Friday 30 January 2026 06:31:16 +0000 (0:00:00.764) 0:43:10.553 ********
2026-01-30 06:31:55.457264 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-01-30 06:31:55.457285 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-01-30 06:31:55.457303 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-01-30 06:31:55.457339 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-01-30 06:31:55.457351 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (596 retries left).
2026-01-30 06:31:55.457361 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:31:55.457372 | orchestrator |
2026-01-30 06:31:55.457383 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-01-30 06:31:55.457394 | orchestrator |
2026-01-30 06:31:55.457404 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:31:55.457415 | orchestrator | Friday 30 January 2026 06:31:34 +0000 (0:00:18.021) 0:43:28.574 ********
2026-01-30 06:31:55.457425 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5
2026-01-30 06:31:55.457436 | orchestrator |
2026-01-30 06:31:55.457447 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-30 06:31:55.457457 | orchestrator | Friday 30 January 2026 06:31:36 +0000 (0:00:01.164) 0:43:29.739 ********
2026-01-30 06:31:55.457468 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457479 | orchestrator |
2026-01-30 06:31:55.457489 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-30 06:31:55.457500 | orchestrator | Friday 30 January 2026 06:31:37 +0000 (0:00:01.439) 0:43:31.178 ********
2026-01-30 06:31:55.457510 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457521 | orchestrator |
2026-01-30 06:31:55.457532 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 06:31:55.457542 | orchestrator | Friday 30 January 2026 06:31:38 +0000 (0:00:01.128) 0:43:32.307 ********
2026-01-30 06:31:55.457553 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457615 | orchestrator |
2026-01-30 06:31:55.457641 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 06:31:55.457652 | orchestrator | Friday 30 January 2026 06:31:40 +0000 (0:00:01.457) 0:43:33.764 ********
2026-01-30 06:31:55.457663 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457673 | orchestrator |
2026-01-30 06:31:55.457684 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 06:31:55.457694 | orchestrator | Friday 30 January 2026 06:31:41 +0000 (0:00:01.131) 0:43:34.896 ********
2026-01-30 06:31:55.457705 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457715 | orchestrator |
2026-01-30 06:31:55.457726 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 06:31:55.457737 | orchestrator | Friday 30 January 2026 06:31:42 +0000 (0:00:01.108) 0:43:36.005 ********
2026-01-30 06:31:55.457747 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457758 | orchestrator |
2026-01-30 06:31:55.457769 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 06:31:55.457779 | orchestrator | Friday 30 January 2026 06:31:43 +0000 (0:00:01.129) 0:43:37.135 ********
2026-01-30 06:31:55.457790 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:31:55.457801 | orchestrator |
2026-01-30 06:31:55.457811 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 06:31:55.457822 | orchestrator | Friday 30 January 2026 06:31:44 +0000 (0:00:01.129) 0:43:38.265 ********
2026-01-30 06:31:55.457832 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457843 | orchestrator |
2026-01-30 06:31:55.457854 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 06:31:55.457864 | orchestrator | Friday 30 January 2026 06:31:45 +0000 (0:00:01.134) 0:43:39.400 ********
2026-01-30 06:31:55.457875 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:31:55.457885 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:31:55.457904 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:31:55.457915 | orchestrator |
2026-01-30 06:31:55.457925 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 06:31:55.457936 | orchestrator | Friday 30 January 2026 06:31:47 +0000 (0:00:02.026) 0:43:41.426 ********
2026-01-30 06:31:55.457946 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:31:55.457957 | orchestrator |
2026-01-30 06:31:55.457968 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 06:31:55.457978 | orchestrator | Friday 30 January 2026 06:31:49 +0000 (0:00:01.300) 0:43:42.727 ********
2026-01-30 06:31:55.457989 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:31:55.458000 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:31:55.458010 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:31:55.458083 | orchestrator |
2026-01-30 06:31:55.458103 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 06:31:55.458120 | orchestrator | Friday 30 January 2026 06:31:52 +0000 (0:00:03.251) 0:43:45.978 ********
2026-01-30 06:31:55.458139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-30 06:31:55.458157 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-30 06:31:55.458193 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-30 06:31:55.458211 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:31:55.458228 | orchestrator |
2026-01-30 06:31:55.458262 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-30 06:31:55.458282 | orchestrator | Friday 30 January 2026 06:31:53 +0000 (0:00:01.397) 0:43:47.375 ********
2026-01-30 06:31:55.458303 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:31:55.458356 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.480918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.481025 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481038 | orchestrator |
2026-01-30 06:32:15.481046 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-30 06:32:15.481054 | orchestrator | Friday 30 January 2026 06:31:55 +0000 (0:00:01.680) 0:43:49.056 ********
2026-01-30 06:32:15.481062 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.481072 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.481078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.481084 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481090 | orchestrator |
2026-01-30 06:32:15.481096 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-30 06:32:15.481102 | orchestrator | Friday 30 January 2026 06:31:56 +0000 (0:00:01.172) 0:43:50.229 ********
2026-01-30 06:32:15.481126 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:31:49.986674', 'end': '2026-01-30 06:31:50.051794', 'delta': '0:00:00.065120', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.481137 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:31:50.552650', 'end': '2026-01-30 06:31:50.608180', 'delta': '0:00:00.055530', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.481177 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:31:51.143204', 'end': '2026-01-30 06:31:51.207286', 'delta': '0:00:00.064082', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:32:15.481185 | orchestrator |
2026-01-30 06:32:15.481191 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-30 06:32:15.481197 | orchestrator | Friday 30 January 2026 06:31:57 +0000 (0:00:01.191) 0:43:51.421 ********
2026-01-30 06:32:15.481203 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:32:15.481210 | orchestrator |
2026-01-30 06:32:15.481216 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 06:32:15.481222 | orchestrator | Friday 30 January 2026 06:31:59 +0000 (0:00:01.268) 0:43:52.724 ********
2026-01-30 06:32:15.481229 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481235 | orchestrator |
2026-01-30 06:32:15.481241 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 06:32:15.481248 | orchestrator | Friday 30 January 2026 06:32:00 +0000 (0:00:01.268) 0:43:53.992 ********
2026-01-30 06:32:15.481254 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:32:15.481259 | orchestrator |
2026-01-30 06:32:15.481263 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 06:32:15.481267 | orchestrator | Friday 30 January 2026 06:32:01 +0000 (0:00:01.125) 0:43:55.118 ********
2026-01-30 06:32:15.481271 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:32:15.481275 | orchestrator |
2026-01-30 06:32:15.481278 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:32:15.481282 | orchestrator | Friday 30 January 2026 06:32:03 +0000 (0:00:02.055) 0:43:57.174 ********
2026-01-30 06:32:15.481286 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:32:15.481290 | orchestrator |
2026-01-30 06:32:15.481293 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 06:32:15.481297 | orchestrator | Friday 30 January 2026 06:32:04 +0000 (0:00:01.144) 0:43:58.319 ********
2026-01-30 06:32:15.481301 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481305 | orchestrator |
2026-01-30 06:32:15.481308 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 06:32:15.481312 | orchestrator | Friday 30 January 2026 06:32:05 +0000 (0:00:01.179) 0:43:59.498 ********
2026-01-30 06:32:15.481316 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481319 | orchestrator |
2026-01-30 06:32:15.481323 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:32:15.481327 | orchestrator | Friday 30 January 2026 06:32:07 +0000 (0:00:01.211) 0:44:00.710 ********
2026-01-30 06:32:15.481331 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481334 | orchestrator |
2026-01-30 06:32:15.481338 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 06:32:15.481342 | orchestrator | Friday 30 January 2026 06:32:08 +0000 (0:00:01.198) 0:44:01.909 ********
2026-01-30 06:32:15.481346 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481350 | orchestrator |
2026-01-30 06:32:15.481354 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 06:32:15.481358 | orchestrator | Friday 30 January 2026 06:32:09 +0000 (0:00:01.198) 0:44:03.107 ********
2026-01-30 06:32:15.481361 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:32:15.481365 | orchestrator |
2026-01-30 06:32:15.481369 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 06:32:15.481378 | orchestrator | Friday 30 January 2026 06:32:10 +0000 (0:00:01.186) 0:44:04.294 ********
2026-01-30 06:32:15.481382 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481386 | orchestrator |
2026-01-30 06:32:15.481389 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 06:32:15.481398 | orchestrator | Friday 30 January 2026 06:32:11 +0000 (0:00:01.100) 0:44:05.394 ********
2026-01-30 06:32:15.481404 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:32:15.481410 | orchestrator |
2026-01-30 06:32:15.481416 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 06:32:15.481421 | orchestrator | Friday 30 January 2026 06:32:12 +0000 (0:00:01.154) 0:44:06.548 ********
2026-01-30 06:32:15.481427 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:32:15.481433 | orchestrator |
2026-01-30 06:32:15.481439 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 06:32:15.481446 | orchestrator | Friday 30 January 2026 06:32:14 +0000 (0:00:01.100) 0:44:07.649 ********
2026-01-30 06:32:15.481453 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:32:15.481459 | orchestrator |
2026-01-30 06:32:15.481465 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 06:32:15.481472 | orchestrator | Friday 30 January 2026 06:32:15 +0000 (0:00:01.166) 0:44:08.815 ********
2026-01-30 06:32:15.481479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:32:15.481493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}})
2026-01-30 06:32:15.485621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:32:15.485668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}})
2026-01-30 06:32:15.485689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:32:15.485695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:32:15.485709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 06:32:15.485714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:32:15.485718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:32:15.485731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:32:15.485736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}})  2026-01-30 06:32:15.485741 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}})  2026-01-30 06:32:15.485749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:32:15.485762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 06:32:16.776644 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:32:16.776744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:32:16.776797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:32:16.776816 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:32:16.776834 | orchestrator | 2026-01-30 06:32:16.776851 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:32:16.776867 | orchestrator | Friday 30 January 2026 06:32:16 +0000 (0:00:01.350) 0:44:10.166 ******** 2026-01-30 06:32:16.776902 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.776921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.776939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.776979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.777013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.777031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.777056 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.777074 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:16.777101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064526 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064718 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064749 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:32:22.064760 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:32:22.064771 | orchestrator | 2026-01-30 06:32:22.064781 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:32:22.064791 | orchestrator | Friday 30 January 2026 06:32:17 +0000 (0:00:01.364) 0:44:11.530 ******** 2026-01-30 06:32:22.064800 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:32:22.064809 | orchestrator | 2026-01-30 06:32:22.064818 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:32:22.064827 | orchestrator | Friday 30 January 2026 06:32:19 +0000 (0:00:01.477) 0:44:13.008 ******** 2026-01-30 06:32:22.064835 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:32:22.064844 | orchestrator | 2026-01-30 06:32:22.064853 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:32:22.064861 | orchestrator | Friday 30 January 2026 06:32:20 +0000 (0:00:01.192) 0:44:14.200 ******** 2026-01-30 06:32:22.064877 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:32:22.064885 | orchestrator | 2026-01-30 06:32:22.064897 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:32:22.064924 | orchestrator | Friday 30 January 2026 06:32:22 +0000 (0:00:01.467) 0:44:15.667 ******** 2026-01-30 06:33:03.019059 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:03.019147 | orchestrator | 2026-01-30 06:33:03.019156 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:33:03.019165 | orchestrator | Friday 30 January 2026 06:32:23 +0000 (0:00:01.095) 0:44:16.762 ******** 2026-01-30 06:33:03.019171 | orchestrator | skipping: [testbed-node-5] 2026-01-30 
06:33:03.019177 | orchestrator |
2026-01-30 06:33:03.019184 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:33:03.019190 | orchestrator | Friday 30 January 2026 06:32:24 +0000 (0:00:01.201) 0:44:17.964 ********
2026-01-30 06:33:03.019196 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019202 | orchestrator |
2026-01-30 06:33:03.019208 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-30 06:33:03.019214 | orchestrator | Friday 30 January 2026 06:32:25 +0000 (0:00:01.238) 0:44:19.203 ********
2026-01-30 06:33:03.019220 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-30 06:33:03.019227 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-30 06:33:03.019233 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-30 06:33:03.019238 | orchestrator |
2026-01-30 06:33:03.019244 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-30 06:33:03.019250 | orchestrator | Friday 30 January 2026 06:32:27 +0000 (0:00:01.451) 0:44:20.654 ********
2026-01-30 06:33:03.019256 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-30 06:33:03.019262 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-30 06:33:03.019268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-30 06:33:03.019274 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019279 | orchestrator |
2026-01-30 06:33:03.019285 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-30 06:33:03.019291 | orchestrator | Friday 30 January 2026 06:32:27 +0000 (0:00:00.934) 0:44:21.588 ********
2026-01-30 06:33:03.019297 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-01-30 06:33:03.019303 | orchestrator |
2026-01-30 06:33:03.019310 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:33:03.019317 | orchestrator | Friday 30 January 2026 06:32:29 +0000 (0:00:01.109) 0:44:22.698 ********
2026-01-30 06:33:03.019323 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019329 | orchestrator |
2026-01-30 06:33:03.019334 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:33:03.019340 | orchestrator | Friday 30 January 2026 06:32:30 +0000 (0:00:01.172) 0:44:23.870 ********
2026-01-30 06:33:03.019346 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019352 | orchestrator |
2026-01-30 06:33:03.019358 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:33:03.019374 | orchestrator | Friday 30 January 2026 06:32:31 +0000 (0:00:01.039) 0:44:24.910 ********
2026-01-30 06:33:03.019381 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019387 | orchestrator |
2026-01-30 06:33:03.019393 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:33:03.019398 | orchestrator | Friday 30 January 2026 06:32:32 +0000 (0:00:01.058) 0:44:25.969 ********
2026-01-30 06:33:03.019404 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:33:03.019410 | orchestrator |
2026-01-30 06:33:03.019416 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:33:03.019422 | orchestrator | Friday 30 January 2026 06:32:33 +0000 (0:00:01.208) 0:44:27.178 ********
2026-01-30 06:33:03.019446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-30 06:33:03.019452 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-30 06:33:03.019458 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:33:03.019463 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019469 | orchestrator |
2026-01-30 06:33:03.019475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:33:03.019481 | orchestrator | Friday 30 January 2026 06:32:34 +0000 (0:00:01.336) 0:44:28.514 ********
2026-01-30 06:33:03.019486 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-30 06:33:03.019492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-30 06:33:03.019498 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:33:03.019504 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019509 | orchestrator |
2026-01-30 06:33:03.019515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:33:03.019521 | orchestrator | Friday 30 January 2026 06:32:36 +0000 (0:00:01.380) 0:44:29.895 ********
2026-01-30 06:33:03.019526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-30 06:33:03.019532 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-30 06:33:03.019538 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:33:03.019543 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019549 | orchestrator |
2026-01-30 06:33:03.019592 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:33:03.019598 | orchestrator | Friday 30 January 2026 06:32:38 +0000 (0:00:01.745) 0:44:31.641 ********
2026-01-30 06:33:03.019604 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:33:03.019611 | orchestrator |
2026-01-30 06:33:03.019618 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:33:03.019625 | orchestrator | Friday 30 January 2026 06:32:39 +0000 (0:00:01.189) 0:44:32.830 ********
2026-01-30 06:33:03.019632 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-30 06:33:03.019638 | orchestrator |
2026-01-30 06:33:03.019645 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-30 06:33:03.019652 | orchestrator | Friday 30 January 2026 06:32:40 +0000 (0:00:01.743) 0:44:34.574 ********
2026-01-30 06:33:03.019671 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:33:03.019678 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:33:03.019685 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:33:03.019692 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 06:33:03.019699 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:33:03.019706 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:33:03.019713 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:33:03.019719 | orchestrator |
2026-01-30 06:33:03.019726 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-30 06:33:03.019733 | orchestrator | Friday 30 January 2026 06:32:42 +0000 (0:00:01.784) 0:44:36.359 ********
2026-01-30 06:33:03.019739 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:33:03.019746 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:33:03.019753 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:33:03.019760 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 06:33:03.019766 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:33:03.019780 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:33:03.019787 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:33:03.019794 | orchestrator |
2026-01-30 06:33:03.019801 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-01-30 06:33:03.019808 | orchestrator | Friday 30 January 2026 06:32:44 +0000 (0:00:02.232) 0:44:38.591 ********
2026-01-30 06:33:03.019815 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:33:03.019821 | orchestrator |
2026-01-30 06:33:03.019828 | orchestrator | TASK [Set num_osds] ************************************************************
2026-01-30 06:33:03.019835 | orchestrator | Friday 30 January 2026 06:32:46 +0000 (0:00:01.156) 0:44:39.748 ********
2026-01-30 06:33:03.019842 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:33:03.019849 | orchestrator |
2026-01-30 06:33:03.019855 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-01-30 06:33:03.019862 | orchestrator | Friday 30 January 2026 06:32:46 +0000 (0:00:00.758) 0:44:40.507 ********
2026-01-30 06:33:03.019869 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:33:03.019876 | orchestrator |
2026-01-30 06:33:03.019882 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-01-30 06:33:03.019893 | orchestrator | Friday 30 January 2026 06:32:47 +0000 (0:00:00.909) 0:44:41.417 ********
2026-01-30 06:33:03.019901 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-01-30 06:33:03.019908 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-01-30 06:33:03.019914 | orchestrator |
2026-01-30 06:33:03.019921 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 06:33:03.019928 | orchestrator | Friday 30 January 2026 06:32:51 +0000 (0:00:03.773) 0:44:45.191 ********
2026-01-30 06:33:03.019935 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-01-30 06:33:03.019942 | orchestrator |
2026-01-30 06:33:03.019948 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 06:33:03.019955 | orchestrator | Friday 30 January 2026 06:32:52 +0000 (0:00:01.093) 0:44:46.285 ********
2026-01-30 06:33:03.019962 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-01-30 06:33:03.019969 | orchestrator |
2026-01-30 06:33:03.019975 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 06:33:03.019980 | orchestrator | Friday 30 January 2026 06:32:53 +0000 (0:00:01.143) 0:44:47.428 ********
2026-01-30 06:33:03.019986 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:33:03.019992 | orchestrator |
2026-01-30 06:33:03.019998 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 06:33:03.020003 | orchestrator | Friday 30 January 2026 06:32:54 +0000 (0:00:01.144) 0:44:48.572 ********
2026-01-30 06:33:03.020009 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:33:03.020015 | orchestrator |
2026-01-30 06:33:03.020021 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 06:33:03.020027 | orchestrator | Friday 30 January 2026 06:32:56 +0000 (0:00:01.572) 0:44:50.145 ********
2026-01-30 06:33:03.020032 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:33:03.020038 | orchestrator |
2026-01-30 06:33:03.020044 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 06:33:03.020050 | orchestrator |
Friday 30 January 2026 06:32:58 +0000 (0:00:01.559) 0:44:51.704 ******** 2026-01-30 06:33:03.020055 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:03.020061 | orchestrator | 2026-01-30 06:33:03.020067 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:33:03.020073 | orchestrator | Friday 30 January 2026 06:32:59 +0000 (0:00:01.578) 0:44:53.283 ******** 2026-01-30 06:33:03.020078 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:03.020084 | orchestrator | 2026-01-30 06:33:03.020090 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:33:03.020096 | orchestrator | Friday 30 January 2026 06:33:00 +0000 (0:00:01.112) 0:44:54.396 ******** 2026-01-30 06:33:03.020106 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:03.020112 | orchestrator | 2026-01-30 06:33:03.020118 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:33:03.020123 | orchestrator | Friday 30 January 2026 06:33:01 +0000 (0:00:01.126) 0:44:55.523 ******** 2026-01-30 06:33:03.020129 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:03.020135 | orchestrator | 2026-01-30 06:33:03.020145 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:33:42.794287 | orchestrator | Friday 30 January 2026 06:33:02 +0000 (0:00:01.087) 0:44:56.610 ******** 2026-01-30 06:33:42.794404 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.794421 | orchestrator | 2026-01-30 06:33:42.794434 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:33:42.794445 | orchestrator | Friday 30 January 2026 06:33:04 +0000 (0:00:01.528) 0:44:58.138 ******** 2026-01-30 06:33:42.794456 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.794467 | orchestrator | 2026-01-30 06:33:42.794479 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:33:42.794489 | orchestrator | Friday 30 January 2026 06:33:06 +0000 (0:00:01.499) 0:44:59.638 ******** 2026-01-30 06:33:42.794500 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.794512 | orchestrator | 2026-01-30 06:33:42.794523 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:33:42.794534 | orchestrator | Friday 30 January 2026 06:33:06 +0000 (0:00:00.761) 0:45:00.400 ******** 2026-01-30 06:33:42.794545 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.794622 | orchestrator | 2026-01-30 06:33:42.794642 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:33:42.794660 | orchestrator | Friday 30 January 2026 06:33:07 +0000 (0:00:00.818) 0:45:01.218 ******** 2026-01-30 06:33:42.794675 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.794686 | orchestrator | 2026-01-30 06:33:42.794697 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:33:42.794708 | orchestrator | Friday 30 January 2026 06:33:08 +0000 (0:00:00.833) 0:45:02.052 ******** 2026-01-30 06:33:42.794719 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.794730 | orchestrator | 2026-01-30 06:33:42.794741 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:33:42.794752 | orchestrator | Friday 30 January 2026 06:33:09 +0000 (0:00:00.794) 0:45:02.847 ******** 2026-01-30 06:33:42.794763 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.794774 | orchestrator | 2026-01-30 06:33:42.794786 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:33:42.794797 | orchestrator | Friday 30 January 2026 06:33:10 +0000 (0:00:00.788) 0:45:03.636 ******** 2026-01-30 06:33:42.794808 | 
orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.794819 | orchestrator | 2026-01-30 06:33:42.794832 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:33:42.794845 | orchestrator | Friday 30 January 2026 06:33:10 +0000 (0:00:00.790) 0:45:04.426 ******** 2026-01-30 06:33:42.794857 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.794869 | orchestrator | 2026-01-30 06:33:42.794881 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:33:42.794894 | orchestrator | Friday 30 January 2026 06:33:11 +0000 (0:00:00.828) 0:45:05.254 ******** 2026-01-30 06:33:42.794906 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.794918 | orchestrator | 2026-01-30 06:33:42.794946 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:33:42.794959 | orchestrator | Friday 30 January 2026 06:33:12 +0000 (0:00:00.761) 0:45:06.016 ******** 2026-01-30 06:33:42.794971 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.794984 | orchestrator | 2026-01-30 06:33:42.794996 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:33:42.795008 | orchestrator | Friday 30 January 2026 06:33:13 +0000 (0:00:00.795) 0:45:06.811 ******** 2026-01-30 06:33:42.795071 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.795086 | orchestrator | 2026-01-30 06:33:42.795098 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:33:42.795111 | orchestrator | Friday 30 January 2026 06:33:13 +0000 (0:00:00.778) 0:45:07.589 ******** 2026-01-30 06:33:42.795123 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795135 | orchestrator | 2026-01-30 06:33:42.795147 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 
06:33:42.795160 | orchestrator | Friday 30 January 2026 06:33:14 +0000 (0:00:00.774) 0:45:08.363 ******** 2026-01-30 06:33:42.795172 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795184 | orchestrator | 2026-01-30 06:33:42.795197 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:33:42.795209 | orchestrator | Friday 30 January 2026 06:33:15 +0000 (0:00:00.804) 0:45:09.168 ******** 2026-01-30 06:33:42.795219 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795230 | orchestrator | 2026-01-30 06:33:42.795241 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:33:42.795252 | orchestrator | Friday 30 January 2026 06:33:16 +0000 (0:00:00.750) 0:45:09.918 ******** 2026-01-30 06:33:42.795263 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795273 | orchestrator | 2026-01-30 06:33:42.795284 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:33:42.795295 | orchestrator | Friday 30 January 2026 06:33:17 +0000 (0:00:00.761) 0:45:10.680 ******** 2026-01-30 06:33:42.795305 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795316 | orchestrator | 2026-01-30 06:33:42.795327 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:33:42.795337 | orchestrator | Friday 30 January 2026 06:33:17 +0000 (0:00:00.790) 0:45:11.470 ******** 2026-01-30 06:33:42.795348 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795358 | orchestrator | 2026-01-30 06:33:42.795369 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:33:42.795380 | orchestrator | Friday 30 January 2026 06:33:18 +0000 (0:00:00.772) 0:45:12.242 ******** 2026-01-30 06:33:42.795390 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795401 | 
orchestrator | 2026-01-30 06:33:42.795412 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:33:42.795423 | orchestrator | Friday 30 January 2026 06:33:19 +0000 (0:00:00.792) 0:45:13.035 ******** 2026-01-30 06:33:42.795434 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795444 | orchestrator | 2026-01-30 06:33:42.795455 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:33:42.795466 | orchestrator | Friday 30 January 2026 06:33:20 +0000 (0:00:00.796) 0:45:13.832 ******** 2026-01-30 06:33:42.795493 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795505 | orchestrator | 2026-01-30 06:33:42.795516 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:33:42.795526 | orchestrator | Friday 30 January 2026 06:33:21 +0000 (0:00:00.894) 0:45:14.726 ******** 2026-01-30 06:33:42.795537 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795548 | orchestrator | 2026-01-30 06:33:42.795584 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:33:42.795596 | orchestrator | Friday 30 January 2026 06:33:21 +0000 (0:00:00.758) 0:45:15.484 ******** 2026-01-30 06:33:42.795606 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795617 | orchestrator | 2026-01-30 06:33:42.795628 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-01-30 06:33:42.795638 | orchestrator | Friday 30 January 2026 06:33:22 +0000 (0:00:00.770) 0:45:16.255 ******** 2026-01-30 06:33:42.795649 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795660 | orchestrator | 2026-01-30 06:33:42.795670 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:33:42.795681 | orchestrator | Friday 30 
January 2026 06:33:23 +0000 (0:00:00.766) 0:45:17.022 ******** 2026-01-30 06:33:42.795700 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.795711 | orchestrator | 2026-01-30 06:33:42.795722 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:33:42.795732 | orchestrator | Friday 30 January 2026 06:33:25 +0000 (0:00:01.602) 0:45:18.625 ******** 2026-01-30 06:33:42.795743 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.795754 | orchestrator | 2026-01-30 06:33:42.795764 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:33:42.795775 | orchestrator | Friday 30 January 2026 06:33:26 +0000 (0:00:01.923) 0:45:20.548 ******** 2026-01-30 06:33:42.795786 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-01-30 06:33:42.795797 | orchestrator | 2026-01-30 06:33:42.795808 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:33:42.795819 | orchestrator | Friday 30 January 2026 06:33:28 +0000 (0:00:01.139) 0:45:21.687 ******** 2026-01-30 06:33:42.795829 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795840 | orchestrator | 2026-01-30 06:33:42.795851 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:33:42.795861 | orchestrator | Friday 30 January 2026 06:33:29 +0000 (0:00:01.102) 0:45:22.790 ******** 2026-01-30 06:33:42.795872 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.795883 | orchestrator | 2026-01-30 06:33:42.795893 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-30 06:33:42.795904 | orchestrator | Friday 30 January 2026 06:33:30 +0000 (0:00:01.148) 0:45:23.939 ******** 2026-01-30 06:33:42.795915 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:33:42.795931 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:33:42.795942 | orchestrator | 2026-01-30 06:33:42.795953 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:33:42.795963 | orchestrator | Friday 30 January 2026 06:33:32 +0000 (0:00:01.784) 0:45:25.724 ******** 2026-01-30 06:33:42.795974 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.795985 | orchestrator | 2026-01-30 06:33:42.795995 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:33:42.796006 | orchestrator | Friday 30 January 2026 06:33:33 +0000 (0:00:01.551) 0:45:27.276 ******** 2026-01-30 06:33:42.796017 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.796027 | orchestrator | 2026-01-30 06:33:42.796038 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:33:42.796048 | orchestrator | Friday 30 January 2026 06:33:34 +0000 (0:00:01.161) 0:45:28.437 ******** 2026-01-30 06:33:42.796059 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.796070 | orchestrator | 2026-01-30 06:33:42.796080 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:33:42.796091 | orchestrator | Friday 30 January 2026 06:33:35 +0000 (0:00:00.833) 0:45:29.271 ******** 2026-01-30 06:33:42.796102 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.796112 | orchestrator | 2026-01-30 06:33:42.796123 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:33:42.796134 | orchestrator | Friday 30 January 2026 06:33:36 +0000 (0:00:00.766) 0:45:30.037 ******** 2026-01-30 06:33:42.796145 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-01-30 06:33:42.796155 | orchestrator | 2026-01-30 06:33:42.796166 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:33:42.796177 | orchestrator | Friday 30 January 2026 06:33:37 +0000 (0:00:01.122) 0:45:31.160 ******** 2026-01-30 06:33:42.796187 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:33:42.796198 | orchestrator | 2026-01-30 06:33:42.796209 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:33:42.796220 | orchestrator | Friday 30 January 2026 06:33:39 +0000 (0:00:01.732) 0:45:32.892 ******** 2026-01-30 06:33:42.796237 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:33:42.796248 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:33:42.796259 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:33:42.796269 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.796280 | orchestrator | 2026-01-30 06:33:42.796291 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:33:42.796301 | orchestrator | Friday 30 January 2026 06:33:40 +0000 (0:00:01.133) 0:45:34.026 ******** 2026-01-30 06:33:42.796312 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:33:42.796322 | orchestrator | 2026-01-30 06:33:42.796333 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:33:42.796344 | orchestrator | Friday 30 January 2026 06:33:41 +0000 (0:00:01.146) 0:45:35.172 ******** 2026-01-30 06:33:42.796361 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.646828 | orchestrator | 2026-01-30 06:34:25.646945 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:34:25.646962 | 
orchestrator | Friday 30 January 2026 06:33:42 +0000 (0:00:01.221) 0:45:36.393 ******** 2026-01-30 06:34:25.646974 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.646987 | orchestrator | 2026-01-30 06:34:25.646998 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:34:25.647009 | orchestrator | Friday 30 January 2026 06:33:43 +0000 (0:00:01.165) 0:45:37.559 ******** 2026-01-30 06:34:25.647020 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647031 | orchestrator | 2026-01-30 06:34:25.647042 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:34:25.647053 | orchestrator | Friday 30 January 2026 06:33:45 +0000 (0:00:01.133) 0:45:38.692 ******** 2026-01-30 06:34:25.647064 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647074 | orchestrator | 2026-01-30 06:34:25.647085 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:34:25.647096 | orchestrator | Friday 30 January 2026 06:33:45 +0000 (0:00:00.778) 0:45:39.471 ******** 2026-01-30 06:34:25.647107 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:34:25.647118 | orchestrator | 2026-01-30 06:34:25.647129 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:34:25.647141 | orchestrator | Friday 30 January 2026 06:33:47 +0000 (0:00:02.123) 0:45:41.594 ******** 2026-01-30 06:34:25.647152 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:34:25.647163 | orchestrator | 2026-01-30 06:34:25.647173 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:34:25.647184 | orchestrator | Friday 30 January 2026 06:33:48 +0000 (0:00:00.845) 0:45:42.440 ******** 2026-01-30 06:34:25.647195 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-01-30 06:34:25.647206 | orchestrator | 2026-01-30 06:34:25.647217 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:34:25.647227 | orchestrator | Friday 30 January 2026 06:33:49 +0000 (0:00:01.112) 0:45:43.553 ******** 2026-01-30 06:34:25.647238 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647249 | orchestrator | 2026-01-30 06:34:25.647260 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:34:25.647271 | orchestrator | Friday 30 January 2026 06:33:51 +0000 (0:00:01.110) 0:45:44.663 ******** 2026-01-30 06:34:25.647282 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647292 | orchestrator | 2026-01-30 06:34:25.647303 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:34:25.647314 | orchestrator | Friday 30 January 2026 06:33:52 +0000 (0:00:01.146) 0:45:45.810 ******** 2026-01-30 06:34:25.647325 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647336 | orchestrator | 2026-01-30 06:34:25.647366 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:34:25.647401 | orchestrator | Friday 30 January 2026 06:33:53 +0000 (0:00:01.141) 0:45:46.952 ******** 2026-01-30 06:34:25.647414 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647427 | orchestrator | 2026-01-30 06:34:25.647439 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:34:25.647450 | orchestrator | Friday 30 January 2026 06:33:54 +0000 (0:00:01.115) 0:45:48.068 ******** 2026-01-30 06:34:25.647461 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647471 | orchestrator | 2026-01-30 06:34:25.647482 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:34:25.647493 | orchestrator | 
Friday 30 January 2026 06:33:55 +0000 (0:00:01.163) 0:45:49.231 ******** 2026-01-30 06:34:25.647503 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647514 | orchestrator | 2026-01-30 06:34:25.647525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:34:25.647536 | orchestrator | Friday 30 January 2026 06:33:56 +0000 (0:00:01.130) 0:45:50.362 ******** 2026-01-30 06:34:25.647546 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647810 | orchestrator | 2026-01-30 06:34:25.647821 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:34:25.647832 | orchestrator | Friday 30 January 2026 06:33:57 +0000 (0:00:01.152) 0:45:51.514 ******** 2026-01-30 06:34:25.647843 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.647862 | orchestrator | 2026-01-30 06:34:25.647881 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:34:25.647901 | orchestrator | Friday 30 January 2026 06:33:59 +0000 (0:00:01.173) 0:45:52.688 ******** 2026-01-30 06:34:25.647920 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:34:25.647939 | orchestrator | 2026-01-30 06:34:25.647958 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:34:25.647977 | orchestrator | Friday 30 January 2026 06:33:59 +0000 (0:00:00.780) 0:45:53.468 ******** 2026-01-30 06:34:25.647997 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-01-30 06:34:25.648019 | orchestrator | 2026-01-30 06:34:25.648038 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:34:25.648059 | orchestrator | Friday 30 January 2026 06:34:01 +0000 (0:00:01.293) 0:45:54.762 ******** 2026-01-30 06:34:25.648078 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-01-30 06:34:25.648099 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-30 06:34:25.648120 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-30 06:34:25.648140 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-30 06:34:25.648159 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-30 06:34:25.648171 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-30 06:34:25.648182 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-30 06:34:25.648192 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:34:25.648203 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:34:25.648237 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:34:25.648248 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:34:25.648259 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:34:25.648270 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:34:25.648280 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:34:25.648291 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-01-30 06:34:25.648302 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-01-30 06:34:25.648312 | orchestrator | 2026-01-30 06:34:25.648323 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:34:25.648334 | orchestrator | Friday 30 January 2026 06:34:07 +0000 (0:00:06.348) 0:46:01.110 ******** 2026-01-30 06:34:25.648362 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-01-30 06:34:25.648373 | orchestrator | 2026-01-30 06:34:25.648384 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-01-30 06:34:25.648395 | orchestrator | Friday 30 January 2026 06:34:08 +0000 (0:00:01.154) 0:46:02.265 ******** 2026-01-30 06:34:25.648406 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:34:25.648418 | orchestrator | 2026-01-30 06:34:25.648429 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-30 06:34:25.648439 | orchestrator | Friday 30 January 2026 06:34:10 +0000 (0:00:01.481) 0:46:03.746 ******** 2026-01-30 06:34:25.648450 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:34:25.648461 | orchestrator | 2026-01-30 06:34:25.648472 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:34:25.648484 | orchestrator | Friday 30 January 2026 06:34:11 +0000 (0:00:01.623) 0:46:05.370 ******** 2026-01-30 06:34:25.648503 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648521 | orchestrator | 2026-01-30 06:34:25.648541 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:34:25.648585 | orchestrator | Friday 30 January 2026 06:34:12 +0000 (0:00:00.794) 0:46:06.164 ******** 2026-01-30 06:34:25.648596 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648607 | orchestrator | 2026-01-30 06:34:25.648618 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:34:25.648628 | orchestrator | Friday 30 January 2026 06:34:13 +0000 (0:00:00.796) 0:46:06.961 ******** 2026-01-30 06:34:25.648639 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648649 | orchestrator | 2026-01-30 06:34:25.648672 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-01-30 06:34:25.648691 | orchestrator | Friday 30 January 2026 06:34:14 +0000 (0:00:00.775) 0:46:07.736 ******** 2026-01-30 06:34:25.648709 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648727 | orchestrator | 2026-01-30 06:34:25.648746 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:34:25.648764 | orchestrator | Friday 30 January 2026 06:34:14 +0000 (0:00:00.795) 0:46:08.532 ******** 2026-01-30 06:34:25.648783 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648801 | orchestrator | 2026-01-30 06:34:25.648820 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:34:25.648839 | orchestrator | Friday 30 January 2026 06:34:15 +0000 (0:00:00.786) 0:46:09.318 ******** 2026-01-30 06:34:25.648857 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648876 | orchestrator | 2026-01-30 06:34:25.648887 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:34:25.648898 | orchestrator | Friday 30 January 2026 06:34:16 +0000 (0:00:00.793) 0:46:10.112 ******** 2026-01-30 06:34:25.648909 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648920 | orchestrator | 2026-01-30 06:34:25.648931 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:34:25.648942 | orchestrator | Friday 30 January 2026 06:34:17 +0000 (0:00:00.777) 0:46:10.890 ******** 2026-01-30 06:34:25.648952 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.648963 | orchestrator | 2026-01-30 06:34:25.648973 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:34:25.648984 | orchestrator | Friday 30 
January 2026 06:34:18 +0000 (0:00:00.887) 0:46:11.778 ******** 2026-01-30 06:34:25.648994 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.649005 | orchestrator | 2026-01-30 06:34:25.649021 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:34:25.649051 | orchestrator | Friday 30 January 2026 06:34:18 +0000 (0:00:00.771) 0:46:12.549 ******** 2026-01-30 06:34:25.649070 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:34:25.649088 | orchestrator | 2026-01-30 06:34:25.649100 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:34:25.649110 | orchestrator | Friday 30 January 2026 06:34:19 +0000 (0:00:00.787) 0:46:13.337 ******** 2026-01-30 06:34:25.649121 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:34:25.649132 | orchestrator | 2026-01-30 06:34:25.649142 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:34:25.649153 | orchestrator | Friday 30 January 2026 06:34:20 +0000 (0:00:00.837) 0:46:14.175 ******** 2026-01-30 06:34:25.649163 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-01-30 06:34:25.649174 | orchestrator | 2026-01-30 06:34:25.649184 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:34:25.649195 | orchestrator | Friday 30 January 2026 06:34:24 +0000 (0:00:04.198) 0:46:18.374 ******** 2026-01-30 06:34:25.649215 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:35:07.349883 | orchestrator | 2026-01-30 06:35:07.349980 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:35:07.349993 | orchestrator | Friday 30 January 2026 06:34:25 +0000 (0:00:00.872) 0:46:19.246 ******** 2026-01-30 06:35:07.350003 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-01-30 06:35:07.350054 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-01-30 06:35:07.350061 | orchestrator | 2026-01-30 06:35:07.350066 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:35:07.350071 | orchestrator | Friday 30 January 2026 06:34:33 +0000 (0:00:07.398) 0:46:26.645 ******** 2026-01-30 06:35:07.350075 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350080 | orchestrator | 2026-01-30 06:35:07.350084 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:35:07.350088 | orchestrator | Friday 30 January 2026 06:34:33 +0000 (0:00:00.793) 0:46:27.439 ******** 2026-01-30 06:35:07.350091 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350095 | orchestrator | 2026-01-30 06:35:07.350099 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:35:07.350104 | orchestrator | Friday 30 January 2026 06:34:34 +0000 (0:00:00.796) 0:46:28.235 ******** 2026-01-30 06:35:07.350108 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350112 | orchestrator | 2026-01-30 06:35:07.350116 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-01-30 06:35:07.350119 | orchestrator | Friday 30 January 2026 06:34:35 +0000 (0:00:00.781) 0:46:29.017 ******** 2026-01-30 06:35:07.350123 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350127 | orchestrator | 2026-01-30 06:35:07.350131 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:35:07.350134 | orchestrator | Friday 30 January 2026 06:34:36 +0000 (0:00:00.791) 0:46:29.809 ******** 2026-01-30 06:35:07.350138 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350142 | orchestrator | 2026-01-30 06:35:07.350157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:35:07.350161 | orchestrator | Friday 30 January 2026 06:34:36 +0000 (0:00:00.797) 0:46:30.607 ******** 2026-01-30 06:35:07.350182 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:35:07.350187 | orchestrator | 2026-01-30 06:35:07.350191 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:35:07.350194 | orchestrator | Friday 30 January 2026 06:34:37 +0000 (0:00:00.903) 0:46:31.510 ******** 2026-01-30 06:35:07.350198 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:35:07.350203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:35:07.350206 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 06:35:07.350210 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350214 | orchestrator | 2026-01-30 06:35:07.350218 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:35:07.350221 | orchestrator | Friday 30 January 2026 06:34:39 +0000 (0:00:01.667) 0:46:33.177 ******** 2026-01-30 06:35:07.350225 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:35:07.350229 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:35:07.350233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 06:35:07.350236 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350240 | orchestrator | 2026-01-30 06:35:07.350244 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:35:07.350248 | orchestrator | Friday 30 January 2026 06:34:40 +0000 (0:00:01.108) 0:46:34.286 ******** 2026-01-30 06:35:07.350251 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:35:07.350255 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:35:07.350259 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 06:35:07.350262 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350266 | orchestrator | 2026-01-30 06:35:07.350270 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:35:07.350274 | orchestrator | Friday 30 January 2026 06:34:41 +0000 (0:00:01.083) 0:46:35.370 ******** 2026-01-30 06:35:07.350277 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:35:07.350281 | orchestrator | 2026-01-30 06:35:07.350285 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:35:07.350289 | orchestrator | Friday 30 January 2026 06:34:42 +0000 (0:00:00.799) 0:46:36.169 ******** 2026-01-30 06:35:07.350292 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-30 06:35:07.350297 | orchestrator | 2026-01-30 06:35:07.350301 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:35:07.350304 | orchestrator | Friday 30 January 2026 06:34:43 +0000 (0:00:01.036) 0:46:37.205 ******** 2026-01-30 06:35:07.350308 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:35:07.350312 | orchestrator | 
2026-01-30 06:35:07.350315 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-01-30 06:35:07.350319 | orchestrator | Friday 30 January 2026 06:34:45 +0000 (0:00:01.440) 0:46:38.646 ******** 2026-01-30 06:35:07.350323 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:35:07.350327 | orchestrator | 2026-01-30 06:35:07.350341 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-01-30 06:35:07.350345 | orchestrator | Friday 30 January 2026 06:34:45 +0000 (0:00:00.818) 0:46:39.464 ******** 2026-01-30 06:35:07.350349 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:35:07.350354 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:35:07.350357 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:35:07.350361 | orchestrator | 2026-01-30 06:35:07.350365 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-01-30 06:35:07.350368 | orchestrator | Friday 30 January 2026 06:34:47 +0000 (0:00:01.656) 0:46:41.121 ******** 2026-01-30 06:35:07.350372 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-01-30 06:35:07.350380 | orchestrator | 2026-01-30 06:35:07.350383 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-01-30 06:35:07.350387 | orchestrator | Friday 30 January 2026 06:34:48 +0000 (0:00:01.165) 0:46:42.287 ******** 2026-01-30 06:35:07.350391 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350395 | orchestrator | 2026-01-30 06:35:07.350398 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-01-30 06:35:07.350402 | orchestrator | Friday 30 January 2026 06:34:49 +0000 (0:00:01.117) 
0:46:43.404 ******** 2026-01-30 06:35:07.350406 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350409 | orchestrator | 2026-01-30 06:35:07.350413 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-01-30 06:35:07.350417 | orchestrator | Friday 30 January 2026 06:34:51 +0000 (0:00:01.248) 0:46:44.654 ******** 2026-01-30 06:35:07.350420 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:35:07.350424 | orchestrator | 2026-01-30 06:35:07.350428 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-01-30 06:35:07.350432 | orchestrator | Friday 30 January 2026 06:34:52 +0000 (0:00:01.471) 0:46:46.125 ******** 2026-01-30 06:35:07.350435 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:35:07.350439 | orchestrator | 2026-01-30 06:35:07.350443 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-01-30 06:35:07.350448 | orchestrator | Friday 30 January 2026 06:34:53 +0000 (0:00:01.173) 0:46:47.298 ******** 2026-01-30 06:35:07.350452 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-01-30 06:35:07.350457 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-01-30 06:35:07.350461 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-01-30 06:35:07.350469 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-01-30 06:35:07.350473 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-01-30 06:35:07.350478 | orchestrator | 2026-01-30 06:35:07.350482 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-01-30 06:35:07.350487 | orchestrator | Friday 30 January 2026 06:34:56 +0000 (0:00:02.599) 0:46:49.898 ******** 2026-01-30 
06:35:07.350491 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350496 | orchestrator | 2026-01-30 06:35:07.350500 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-01-30 06:35:07.350504 | orchestrator | Friday 30 January 2026 06:34:57 +0000 (0:00:00.754) 0:46:50.652 ******** 2026-01-30 06:35:07.350509 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-01-30 06:35:07.350513 | orchestrator | 2026-01-30 06:35:07.350518 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-01-30 06:35:07.350522 | orchestrator | Friday 30 January 2026 06:34:58 +0000 (0:00:01.124) 0:46:51.777 ******** 2026-01-30 06:35:07.350526 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-01-30 06:35:07.350531 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-01-30 06:35:07.350535 | orchestrator | 2026-01-30 06:35:07.350540 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-01-30 06:35:07.350559 | orchestrator | Friday 30 January 2026 06:34:59 +0000 (0:00:01.790) 0:46:53.568 ******** 2026-01-30 06:35:07.350566 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 06:35:07.350571 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-30 06:35:07.350575 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 06:35:07.350579 | orchestrator | 2026-01-30 06:35:07.350584 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-01-30 06:35:07.350590 | orchestrator | Friday 30 January 2026 06:35:03 +0000 (0:00:03.292) 0:46:56.861 ******** 2026-01-30 06:35:07.350596 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-01-30 06:35:07.350607 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-30 
06:35:07.350613 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:35:07.350619 | orchestrator | 2026-01-30 06:35:07.350625 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-01-30 06:35:07.350631 | orchestrator | Friday 30 January 2026 06:35:04 +0000 (0:00:01.658) 0:46:58.520 ******** 2026-01-30 06:35:07.350637 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350644 | orchestrator | 2026-01-30 06:35:07.350649 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-01-30 06:35:07.350655 | orchestrator | Friday 30 January 2026 06:35:05 +0000 (0:00:00.895) 0:46:59.416 ******** 2026-01-30 06:35:07.350661 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350667 | orchestrator | 2026-01-30 06:35:07.350673 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-01-30 06:35:07.350678 | orchestrator | Friday 30 January 2026 06:35:06 +0000 (0:00:00.755) 0:47:00.171 ******** 2026-01-30 06:35:07.350684 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:35:07.350690 | orchestrator | 2026-01-30 06:35:07.350701 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-01-30 06:37:31.987668 | orchestrator | Friday 30 January 2026 06:35:07 +0000 (0:00:00.778) 0:47:00.949 ******** 2026-01-30 06:37:31.987794 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-01-30 06:37:31.987802 | orchestrator | 2026-01-30 06:37:31.987808 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-01-30 06:37:31.987813 | orchestrator | Friday 30 January 2026 06:35:08 +0000 (0:00:01.311) 0:47:02.261 ******** 2026-01-30 06:37:31.987818 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:37:31.987824 | orchestrator | 2026-01-30 06:37:31.987829 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-01-30 06:37:31.987833 | orchestrator | Friday 30 January 2026 06:35:10 +0000 (0:00:01.492) 0:47:03.754 ******** 2026-01-30 06:37:31.987838 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:37:31.987842 | orchestrator | 2026-01-30 06:37:31.987846 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-01-30 06:37:31.987850 | orchestrator | Friday 30 January 2026 06:35:13 +0000 (0:00:03.534) 0:47:07.288 ******** 2026-01-30 06:37:31.987854 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-01-30 06:37:31.987858 | orchestrator | 2026-01-30 06:37:31.987862 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-01-30 06:37:31.987866 | orchestrator | Friday 30 January 2026 06:35:14 +0000 (0:00:01.152) 0:47:08.441 ******** 2026-01-30 06:37:31.987870 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:37:31.987874 | orchestrator | 2026-01-30 06:37:31.987879 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-01-30 06:37:31.987883 | orchestrator | Friday 30 January 2026 06:35:16 +0000 (0:00:01.998) 0:47:10.439 ******** 2026-01-30 06:37:31.987887 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:37:31.987891 | orchestrator | 2026-01-30 06:37:31.987900 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-01-30 06:37:31.987904 | orchestrator | Friday 30 January 2026 06:35:18 +0000 (0:00:01.971) 0:47:12.411 ******** 2026-01-30 06:37:31.987908 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:37:31.987913 | orchestrator | 2026-01-30 06:37:31.987917 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-01-30 06:37:31.987921 | orchestrator | Friday 30 January 2026 06:35:21 +0000 (0:00:02.300) 0:47:14.711 ******** 2026-01-30 
06:37:31.987925 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:37:31.987930 | orchestrator | 2026-01-30 06:37:31.987934 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-01-30 06:37:31.987939 | orchestrator | Friday 30 January 2026 06:35:22 +0000 (0:00:01.127) 0:47:15.838 ******** 2026-01-30 06:37:31.987943 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:37:31.987947 | orchestrator | 2026-01-30 06:37:31.987968 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-01-30 06:37:31.987991 | orchestrator | Friday 30 January 2026 06:35:23 +0000 (0:00:01.135) 0:47:16.973 ******** 2026-01-30 06:37:31.987995 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-30 06:37:31.987999 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-30 06:37:31.988004 | orchestrator | 2026-01-30 06:37:31.988008 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-01-30 06:37:31.988012 | orchestrator | Friday 30 January 2026 06:35:25 +0000 (0:00:01.845) 0:47:18.819 ******** 2026-01-30 06:37:31.988016 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-01-30 06:37:31.988020 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-01-30 06:37:31.988027 | orchestrator | 2026-01-30 06:37:31.988035 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-01-30 06:37:31.988042 | orchestrator | Friday 30 January 2026 06:35:28 +0000 (0:00:02.931) 0:47:21.750 ******** 2026-01-30 06:37:31.988049 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-01-30 06:37:31.988056 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-01-30 06:37:31.988063 | orchestrator | 2026-01-30 06:37:31.988070 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-01-30 06:37:31.988077 | orchestrator | Friday 30 January 2026 06:35:32 +0000 (0:00:04.354) 
0:47:26.105 ******** 2026-01-30 06:37:31.988084 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:37:31.988092 | orchestrator | 2026-01-30 06:37:31.988100 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-01-30 06:37:31.988107 | orchestrator | Friday 30 January 2026 06:35:33 +0000 (0:00:01.371) 0:47:27.476 ******** 2026-01-30 06:37:31.988111 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-01-30 06:37:31.988117 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:37:31.988122 | orchestrator | 2026-01-30 06:37:31.988126 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-01-30 06:37:31.988131 | orchestrator | Friday 30 January 2026 06:35:46 +0000 (0:00:12.948) 0:47:40.424 ******** 2026-01-30 06:37:31.988135 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:37:31.988139 | orchestrator | 2026-01-30 06:37:31.988143 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-01-30 06:37:31.988147 | orchestrator | Friday 30 January 2026 06:35:47 +0000 (0:00:00.834) 0:47:41.259 ******** 2026-01-30 06:37:31.988151 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:37:31.988155 | orchestrator | 2026-01-30 06:37:31.988159 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-01-30 06:37:31.988164 | orchestrator | Friday 30 January 2026 06:35:48 +0000 (0:00:00.803) 0:47:42.063 ******** 2026-01-30 06:37:31.988168 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:37:31.988172 | orchestrator | 2026-01-30 06:37:31.988176 | orchestrator | TASK [Waiting for clean pgs...] 
************************************************ 2026-01-30 06:37:31.988180 | orchestrator | Friday 30 January 2026 06:35:49 +0000 (0:00:00.757) 0:47:42.820 ******** 2026-01-30 06:37:31.988184 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 2026-01-30 06:37:31.988188 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:37:31.988192 | orchestrator | 2026-01-30 06:37:31.988211 | orchestrator | PLAY [Complete osd upgrade] **************************************************** 2026-01-30 06:37:31.988216 | orchestrator | 2026-01-30 06:37:31.988220 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:37:31.988224 | orchestrator | Friday 30 January 2026 06:35:54 +0000 (0:00:05.642) 0:47:48.463 ******** 2026-01-30 06:37:31.988228 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:37:31.988232 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:37:31.988236 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:37:31.988240 | orchestrator | 2026-01-30 06:37:31.988244 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:37:31.988254 | orchestrator | Friday 30 January 2026 06:35:56 +0000 (0:00:01.532) 0:47:49.995 ******** 2026-01-30 06:37:31.988258 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:37:31.988262 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:37:31.988266 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:37:31.988270 | orchestrator | 2026-01-30 06:37:31.988274 | orchestrator | TASK [Re-enable pg autoscale on pools] ***************************************** 2026-01-30 06:37:31.988278 | orchestrator | Friday 30 January 2026 06:35:57 +0000 (0:00:01.461) 0:47:51.456 ******** 2026-01-30 06:37:31.988282 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-01-30 06:37:31.988287 | 
orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-01-30 06:37:31.988291 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-01-30 06:37:31.988295 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-01-30 06:37:31.988302 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-01-30 06:37:31.988306 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-01-30 06:37:31.988310 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-01-30 06:37:31.988314 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-01-30 06:37:31.988318 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-01-30 06:37:31.988326 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-01-30 06:37:31.988330 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-01-30 06:37:31.988334 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-01-30 06:37:31.988338 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-01-30 06:37:31.988342 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-01-30 06:37:31.988346 | orchestrator | 2026-01-30 06:37:31.988351 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-01-30 
06:37:31.988355 | orchestrator | Friday 30 January 2026 06:37:15 +0000 (0:01:17.501) 0:49:08.957 ******** 2026-01-30 06:37:31.988359 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-01-30 06:37:31.988363 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-01-30 06:37:31.988367 | orchestrator | 2026-01-30 06:37:31.988371 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-01-30 06:37:31.988375 | orchestrator | Friday 30 January 2026 06:37:21 +0000 (0:00:05.703) 0:49:14.661 ******** 2026-01-30 06:37:31.988379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:37:31.988383 | orchestrator | 2026-01-30 06:37:31.988387 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-01-30 06:37:31.988391 | orchestrator | 2026-01-30 06:37:31.988398 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:37:31.988405 | orchestrator | Friday 30 January 2026 06:37:24 +0000 (0:00:03.302) 0:49:17.963 ******** 2026-01-30 06:37:31.988411 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-01-30 06:37:31.988419 | orchestrator | 2026-01-30 06:37:31.988425 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 06:37:31.988446 | orchestrator | Friday 30 January 2026 06:37:25 +0000 (0:00:01.172) 0:49:19.136 ******** 2026-01-30 06:37:31.988460 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:31.988473 | orchestrator | 2026-01-30 06:37:31.988480 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 06:37:31.988486 | orchestrator | Friday 30 January 2026 06:37:26 +0000 (0:00:01.475) 0:49:20.612 ******** 2026-01-30 06:37:31.988493 | orchestrator | ok: 
[testbed-node-0] 2026-01-30 06:37:31.988499 | orchestrator | 2026-01-30 06:37:31.988503 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:37:31.988508 | orchestrator | Friday 30 January 2026 06:37:28 +0000 (0:00:01.223) 0:49:21.836 ******** 2026-01-30 06:37:31.988512 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:31.988516 | orchestrator | 2026-01-30 06:37:31.988520 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:37:31.988524 | orchestrator | Friday 30 January 2026 06:37:29 +0000 (0:00:01.481) 0:49:23.317 ******** 2026-01-30 06:37:31.988528 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:31.988532 | orchestrator | 2026-01-30 06:37:31.988536 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 06:37:31.988563 | orchestrator | Friday 30 January 2026 06:37:30 +0000 (0:00:01.137) 0:49:24.455 ******** 2026-01-30 06:37:31.988572 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.389342 | orchestrator | 2026-01-30 06:37:57.389477 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 06:37:57.389500 | orchestrator | Friday 30 January 2026 06:37:31 +0000 (0:00:01.128) 0:49:25.583 ******** 2026-01-30 06:37:57.389514 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.389531 | orchestrator | 2026-01-30 06:37:57.389627 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 06:37:57.389641 | orchestrator | Friday 30 January 2026 06:37:33 +0000 (0:00:01.170) 0:49:26.754 ******** 2026-01-30 06:37:57.389655 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:37:57.389670 | orchestrator | 2026-01-30 06:37:57.389684 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 06:37:57.389698 | orchestrator | 
Friday 30 January 2026 06:37:34 +0000 (0:00:01.141) 0:49:27.896 ******** 2026-01-30 06:37:57.389712 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.389726 | orchestrator | 2026-01-30 06:37:57.389742 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 06:37:57.389757 | orchestrator | Friday 30 January 2026 06:37:35 +0000 (0:00:01.164) 0:49:29.061 ******** 2026-01-30 06:37:57.389772 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:37:57.389789 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:37:57.389805 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:37:57.389820 | orchestrator | 2026-01-30 06:37:57.389835 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 06:37:57.389849 | orchestrator | Friday 30 January 2026 06:37:37 +0000 (0:00:01.785) 0:49:30.846 ******** 2026-01-30 06:37:57.389864 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.389879 | orchestrator | 2026-01-30 06:37:57.389893 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 06:37:57.389909 | orchestrator | Friday 30 January 2026 06:37:38 +0000 (0:00:01.315) 0:49:32.162 ******** 2026-01-30 06:37:57.389925 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:37:57.389942 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:37:57.389957 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:37:57.389972 | orchestrator | 2026-01-30 06:37:57.389988 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 06:37:57.390005 | orchestrator | Friday 30 January 2026 06:37:41 +0000 (0:00:03.428) 
0:49:35.591 ******** 2026-01-30 06:37:57.390098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 06:37:57.390127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 06:37:57.390160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 06:37:57.390171 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:37:57.390180 | orchestrator | 2026-01-30 06:37:57.390191 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 06:37:57.390201 | orchestrator | Friday 30 January 2026 06:37:43 +0000 (0:00:01.444) 0:49:37.035 ******** 2026-01-30 06:37:57.390212 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 06:37:57.390226 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 06:37:57.390236 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 06:37:57.390247 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:37:57.390257 | orchestrator | 2026-01-30 06:37:57.390265 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 06:37:57.390274 | orchestrator | Friday 30 January 2026 06:37:45 +0000 (0:00:01.986) 0:49:39.022 ******** 2026-01-30 06:37:57.390285 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:37:57.390299 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:37:57.390328 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:37:57.390338 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:37:57.390347 | orchestrator | 2026-01-30 06:37:57.390356 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 06:37:57.390364 | orchestrator | Friday 30 January 2026 06:37:46 +0000 (0:00:01.209) 0:49:40.231 ******** 2026-01-30 06:37:57.390375 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:37:39.123650', 'end': '2026-01-30 06:37:39.163314', 'delta': '0:00:00.039664', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 06:37:57.390394 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:37:39.722354', 'end': '2026-01-30 06:37:39.791427', 'delta': '0:00:00.069073', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 06:37:57.390410 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:37:40.799539', 'end': '2026-01-30 06:37:40.841049', 'delta': '0:00:00.041510', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 06:37:57.390419 
| orchestrator | 2026-01-30 06:37:57.390428 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 06:37:57.390437 | orchestrator | Friday 30 January 2026 06:37:47 +0000 (0:00:01.352) 0:49:41.584 ******** 2026-01-30 06:37:57.390446 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.390454 | orchestrator | 2026-01-30 06:37:57.390463 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 06:37:57.390472 | orchestrator | Friday 30 January 2026 06:37:49 +0000 (0:00:01.290) 0:49:42.875 ******** 2026-01-30 06:37:57.390481 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:37:57.390489 | orchestrator | 2026-01-30 06:37:57.390498 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 06:37:57.390507 | orchestrator | Friday 30 January 2026 06:37:50 +0000 (0:00:01.290) 0:49:44.165 ******** 2026-01-30 06:37:57.390515 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.390524 | orchestrator | 2026-01-30 06:37:57.390532 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 06:37:57.390570 | orchestrator | Friday 30 January 2026 06:37:51 +0000 (0:00:01.149) 0:49:45.315 ******** 2026-01-30 06:37:57.390580 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.390588 | orchestrator | 2026-01-30 06:37:57.390597 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:37:57.390605 | orchestrator | Friday 30 January 2026 06:37:53 +0000 (0:00:02.089) 0:49:47.405 ******** 2026-01-30 06:37:57.390614 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:37:57.390623 | orchestrator | 2026-01-30 06:37:57.390631 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 06:37:57.390640 | orchestrator | Friday 30 January 2026 06:37:54 +0000 (0:00:01.177) 
0:49:48.583 ******** 2026-01-30 06:37:57.390648 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:37:57.390657 | orchestrator | 2026-01-30 06:37:57.390665 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 06:37:57.390674 | orchestrator | Friday 30 January 2026 06:37:56 +0000 (0:00:01.200) 0:49:49.783 ******** 2026-01-30 06:37:57.390689 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956038 | orchestrator | 2026-01-30 06:38:07.956121 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:38:07.956128 | orchestrator | Friday 30 January 2026 06:37:57 +0000 (0:00:01.204) 0:49:50.987 ******** 2026-01-30 06:38:07.956133 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956138 | orchestrator | 2026-01-30 06:38:07.956142 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 06:38:07.956166 | orchestrator | Friday 30 January 2026 06:37:58 +0000 (0:00:01.143) 0:49:52.131 ******** 2026-01-30 06:38:07.956173 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956179 | orchestrator | 2026-01-30 06:38:07.956186 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 06:38:07.956192 | orchestrator | Friday 30 January 2026 06:37:59 +0000 (0:00:01.129) 0:49:53.261 ******** 2026-01-30 06:38:07.956198 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956204 | orchestrator | 2026-01-30 06:38:07.956210 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-30 06:38:07.956217 | orchestrator | Friday 30 January 2026 06:38:00 +0000 (0:00:01.098) 0:49:54.359 ******** 2026-01-30 06:38:07.956223 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956229 | orchestrator | 2026-01-30 06:38:07.956235 | orchestrator | TASK [ceph-facts : Set_fact build 
dedicated_devices from resolved symlinks] **** 2026-01-30 06:38:07.956242 | orchestrator | Friday 30 January 2026 06:38:01 +0000 (0:00:01.129) 0:49:55.489 ******** 2026-01-30 06:38:07.956248 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956254 | orchestrator | 2026-01-30 06:38:07.956260 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 06:38:07.956267 | orchestrator | Friday 30 January 2026 06:38:03 +0000 (0:00:01.147) 0:49:56.636 ******** 2026-01-30 06:38:07.956273 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956279 | orchestrator | 2026-01-30 06:38:07.956285 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 06:38:07.956292 | orchestrator | Friday 30 January 2026 06:38:04 +0000 (0:00:01.192) 0:49:57.829 ******** 2026-01-30 06:38:07.956299 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956304 | orchestrator | 2026-01-30 06:38:07.956310 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 06:38:07.956316 | orchestrator | Friday 30 January 2026 06:38:05 +0000 (0:00:01.114) 0:49:58.943 ******** 2026-01-30 06:38:07.956338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 06:38:07.956370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': 
{}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': 
['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:38:07.956430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:38:07.956448 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:07.956454 | orchestrator | 2026-01-30 06:38:07.956460 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:38:07.956467 | orchestrator | Friday 30 January 
2026 06:38:06 +0000 (0:00:01.300) 0:50:00.244 ******** 2026-01-30 06:38:07.956480 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221027 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221149 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221193 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221231 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221279 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221334 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6f62995b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14', 
'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f62995b-1598-4105-b2bc-5f2a0c02af64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221358 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221386 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:38:12.221404 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:38:12.221423 | orchestrator | 2026-01-30 06:38:12.221442 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:38:12.221461 | orchestrator | Friday 30 January 2026 06:38:07 +0000 (0:00:01.313) 0:50:01.557 ******** 2026-01-30 06:38:12.221477 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:38:12.221495 | orchestrator | 2026-01-30 06:38:12.221511 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:38:12.221526 | orchestrator 
| Friday 30 January 2026 06:38:09 +0000 (0:00:01.569) 0:50:03.127 ******** 2026-01-30 06:38:12.221571 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:38:12.221589 | orchestrator | 2026-01-30 06:38:12.221606 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:38:12.221622 | orchestrator | Friday 30 January 2026 06:38:10 +0000 (0:00:01.166) 0:50:04.294 ******** 2026-01-30 06:38:12.221638 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:38:12.221654 | orchestrator | 2026-01-30 06:38:12.221670 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:38:12.221696 | orchestrator | Friday 30 January 2026 06:38:12 +0000 (0:00:01.528) 0:50:05.823 ******** 2026-01-30 06:39:06.021350 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:39:06.021492 | orchestrator | 2026-01-30 06:39:06.021521 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:39:06.021607 | orchestrator | Friday 30 January 2026 06:38:13 +0000 (0:00:01.132) 0:50:06.955 ******** 2026-01-30 06:39:06.021629 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:39:06.021649 | orchestrator | 2026-01-30 06:39:06.021667 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:39:06.021684 | orchestrator | Friday 30 January 2026 06:38:14 +0000 (0:00:01.234) 0:50:08.190 ******** 2026-01-30 06:39:06.021701 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:39:06.021718 | orchestrator | 2026-01-30 06:39:06.021734 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:39:06.021751 | orchestrator | Friday 30 January 2026 06:38:15 +0000 (0:00:01.156) 0:50:09.346 ******** 2026-01-30 06:39:06.021769 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:39:06.021789 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-01-30 06:39:06.021808 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-01-30 06:39:06.021825 | orchestrator | 2026-01-30 06:39:06.021844 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:39:06.021865 | orchestrator | Friday 30 January 2026 06:38:17 +0000 (0:00:02.144) 0:50:11.491 ******** 2026-01-30 06:39:06.021885 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-30 06:39:06.021904 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-30 06:39:06.021924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-30 06:39:06.021944 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:39:06.021964 | orchestrator | 2026-01-30 06:39:06.021984 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:39:06.022003 | orchestrator | Friday 30 January 2026 06:38:19 +0000 (0:00:01.237) 0:50:12.728 ******** 2026-01-30 06:39:06.022111 | orchestrator | skipping: [testbed-node-0] 2026-01-30 06:39:06.022156 | orchestrator | 2026-01-30 06:39:06.022168 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:39:06.022195 | orchestrator | Friday 30 January 2026 06:38:20 +0000 (0:00:01.126) 0:50:13.855 ******** 2026-01-30 06:39:06.022206 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:39:06.022218 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:39:06.022230 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:39:06.022240 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:39:06.022252 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-01-30 06:39:06.022262 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:39:06.022273 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:39:06.022284 | orchestrator | 2026-01-30 06:39:06.022295 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:39:06.022305 | orchestrator | Friday 30 January 2026 06:38:22 +0000 (0:00:02.346) 0:50:16.202 ******** 2026-01-30 06:39:06.022316 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-30 06:39:06.022327 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:39:06.022338 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:39:06.022348 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:39:06.022359 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:39:06.022370 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:39:06.022382 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:39:06.022392 | orchestrator | 2026-01-30 06:39:06.022403 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-01-30 06:39:06.022414 | orchestrator | Friday 30 January 2026 06:38:25 +0000 (0:00:02.687) 0:50:18.890 ******** 2026-01-30 06:39:06.022425 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:39:06.022436 | orchestrator | 2026-01-30 06:39:06.022447 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-01-30 06:39:06.022458 | orchestrator | Friday 30 January 2026 06:38:28 +0000 (0:00:03.297) 
0:50:22.188 ******** 2026-01-30 06:39:06.022468 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:39:06.022479 | orchestrator | 2026-01-30 06:39:06.022490 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-01-30 06:39:06.022501 | orchestrator | Friday 30 January 2026 06:38:31 +0000 (0:00:03.084) 0:50:25.273 ******** 2026-01-30 06:39:06.022512 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:39:06.022522 | orchestrator | 2026-01-30 06:39:06.022593 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-01-30 06:39:06.022617 | orchestrator | Friday 30 January 2026 06:38:33 +0000 (0:00:02.238) 0:50:27.512 ******** 2026-01-30 06:39:06.022668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4739', 'value': {'gid': 4739, 'name': 'testbed-node-3', 'rank': 0, 'incarnation': 3, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.13:6817/3774265369', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.13:6816', 'nonce': 3774265369}, {'type': 'v1', 'addr': '192.168.16.13:6817', 'nonce': 3774265369}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-01-30 06:39:06.022696 | orchestrator | 2026-01-30 06:39:06.022708 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-01-30 06:39:06.022719 | orchestrator | Friday 30 January 2026 06:38:35 +0000 (0:00:01.174) 0:50:28.687 ******** 2026-01-30 06:39:06.022729 | orchestrator | ok: [testbed-node-0] 
=> (item=testbed-node-3) 2026-01-30 06:39:06.022740 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-30 06:39:06.022751 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-30 06:39:06.022762 | orchestrator | 2026-01-30 06:39:06.022773 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-01-30 06:39:06.022784 | orchestrator | Friday 30 January 2026 06:38:36 +0000 (0:00:01.561) 0:50:30.249 ******** 2026-01-30 06:39:06.022795 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-01-30 06:39:06.022806 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-01-30 06:39:06.022817 | orchestrator | 2026-01-30 06:39:06.022827 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-01-30 06:39:06.022838 | orchestrator | Friday 30 January 2026 06:38:38 +0000 (0:00:01.493) 0:50:31.742 ******** 2026-01-30 06:39:06.022849 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:39:06.022860 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:39:06.022871 | orchestrator | 2026-01-30 06:39:06.022888 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-01-30 06:39:06.022899 | orchestrator | Friday 30 January 2026 06:38:46 +0000 (0:00:08.175) 0:50:39.918 ******** 2026-01-30 06:39:06.022910 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:39:06.022920 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:39:06.022931 | orchestrator | 2026-01-30 06:39:06.022942 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-01-30 06:39:06.022953 | orchestrator | Friday 
30 January 2026 06:38:50 +0000 (0:00:04.287) 0:50:44.206 ******** 2026-01-30 06:39:06.022963 | orchestrator | ok: [testbed-node-0] 2026-01-30 06:39:06.022974 | orchestrator | 2026-01-30 06:39:06.022985 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-01-30 06:39:06.022996 | orchestrator | Friday 30 January 2026 06:38:52 +0000 (0:00:02.172) 0:50:46.378 ******** 2026-01-30 06:39:06.023006 | orchestrator | changed: [testbed-node-0] 2026-01-30 06:39:06.023017 | orchestrator | 2026-01-30 06:39:06.023028 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-01-30 06:39:06.023039 | orchestrator | 2026-01-30 06:39:06.023050 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:39:06.023060 | orchestrator | Friday 30 January 2026 06:38:55 +0000 (0:00:02.329) 0:50:48.707 ******** 2026-01-30 06:39:06.023071 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-01-30 06:39:06.023082 | orchestrator | 2026-01-30 06:39:06.023092 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 06:39:06.023103 | orchestrator | Friday 30 January 2026 06:38:56 +0000 (0:00:01.144) 0:50:49.852 ******** 2026-01-30 06:39:06.023114 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:06.023125 | orchestrator | 2026-01-30 06:39:06.023135 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 06:39:06.023146 | orchestrator | Friday 30 January 2026 06:38:57 +0000 (0:00:01.445) 0:50:51.298 ******** 2026-01-30 06:39:06.023157 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:06.023168 | orchestrator | 2026-01-30 06:39:06.023178 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:39:06.023189 | orchestrator | Friday 30 January 2026 
06:38:58 +0000 (0:00:01.111) 0:50:52.409 ******** 2026-01-30 06:39:06.023200 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:06.023217 | orchestrator | 2026-01-30 06:39:06.023228 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:39:06.023239 | orchestrator | Friday 30 January 2026 06:39:00 +0000 (0:00:01.468) 0:50:53.878 ******** 2026-01-30 06:39:06.023250 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:06.023261 | orchestrator | 2026-01-30 06:39:06.023272 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 06:39:06.023282 | orchestrator | Friday 30 January 2026 06:39:01 +0000 (0:00:01.119) 0:50:54.998 ******** 2026-01-30 06:39:06.023293 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:06.023304 | orchestrator | 2026-01-30 06:39:06.023315 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 06:39:06.023326 | orchestrator | Friday 30 January 2026 06:39:02 +0000 (0:00:01.123) 0:50:56.122 ******** 2026-01-30 06:39:06.023336 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:06.023347 | orchestrator | 2026-01-30 06:39:06.023358 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 06:39:06.023369 | orchestrator | Friday 30 January 2026 06:39:03 +0000 (0:00:01.143) 0:50:57.266 ******** 2026-01-30 06:39:06.023380 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:06.023390 | orchestrator | 2026-01-30 06:39:06.023401 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 06:39:06.023412 | orchestrator | Friday 30 January 2026 06:39:04 +0000 (0:00:01.220) 0:50:58.486 ******** 2026-01-30 06:39:06.023423 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:06.023434 | orchestrator | 2026-01-30 06:39:06.023452 | orchestrator | TASK [ceph-facts : Set_fact 
monitor_name ansible_facts['hostname']] ************ 2026-01-30 06:39:31.391807 | orchestrator | Friday 30 January 2026 06:39:06 +0000 (0:00:01.131) 0:50:59.618 ******** 2026-01-30 06:39:31.391927 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:39:31.391944 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:39:31.391956 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:39:31.391968 | orchestrator | 2026-01-30 06:39:31.391980 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-30 06:39:31.391991 | orchestrator | Friday 30 January 2026 06:39:08 +0000 (0:00:02.008) 0:51:01.626 ******** 2026-01-30 06:39:31.392002 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:31.392014 | orchestrator | 2026-01-30 06:39:31.392026 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 06:39:31.392037 | orchestrator | Friday 30 January 2026 06:39:09 +0000 (0:00:01.224) 0:51:02.850 ******** 2026-01-30 06:39:31.392047 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:39:31.392058 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:39:31.392069 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:39:31.392080 | orchestrator | 2026-01-30 06:39:31.392091 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 06:39:31.392102 | orchestrator | Friday 30 January 2026 06:39:12 +0000 (0:00:03.363) 0:51:06.215 ******** 2026-01-30 06:39:31.392113 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-30 06:39:31.392124 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-1)  2026-01-30 06:39:31.392135 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-30 06:39:31.392146 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392157 | orchestrator | 2026-01-30 06:39:31.392184 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 06:39:31.392196 | orchestrator | Friday 30 January 2026 06:39:14 +0000 (0:00:01.841) 0:51:08.056 ******** 2026-01-30 06:39:31.392208 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 06:39:31.392244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 06:39:31.392257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 06:39:31.392268 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392279 | orchestrator | 2026-01-30 06:39:31.392291 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 06:39:31.392301 | orchestrator | Friday 30 January 2026 06:39:16 +0000 (0:00:01.664) 0:51:09.721 ******** 2026-01-30 06:39:31.392314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:39:31.392328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:39:31.392343 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:39:31.392356 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392369 | orchestrator | 2026-01-30 06:39:31.392381 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 06:39:31.392394 | orchestrator | Friday 30 January 2026 06:39:17 +0000 (0:00:01.147) 0:51:10.868 ******** 2026-01-30 06:39:31.392427 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:39:10.196049', 'end': '2026-01-30 06:39:10.240734', 'delta': '0:00:00.044685', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 06:39:31.392445 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:39:10.813974', 'end': '2026-01-30 06:39:10.890253', 'delta': '0:00:00.076279', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 06:39:31.392473 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:39:11.408131', 'end': '2026-01-30 06:39:11.451164', 'delta': '0:00:00.043033', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 06:39:31.392486 | orchestrator | 2026-01-30 06:39:31.392499 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 06:39:31.392512 | orchestrator | Friday 30 
January 2026 06:39:18 +0000 (0:00:01.217) 0:51:12.086 ******** 2026-01-30 06:39:31.392524 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:31.392561 | orchestrator | 2026-01-30 06:39:31.392574 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 06:39:31.392587 | orchestrator | Friday 30 January 2026 06:39:19 +0000 (0:00:01.281) 0:51:13.368 ******** 2026-01-30 06:39:31.392600 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392612 | orchestrator | 2026-01-30 06:39:31.392624 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-30 06:39:31.392637 | orchestrator | Friday 30 January 2026 06:39:21 +0000 (0:00:01.254) 0:51:14.623 ******** 2026-01-30 06:39:31.392649 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:31.392662 | orchestrator | 2026-01-30 06:39:31.392674 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 06:39:31.392687 | orchestrator | Friday 30 January 2026 06:39:22 +0000 (0:00:01.129) 0:51:15.753 ******** 2026-01-30 06:39:31.392699 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:39:31.392711 | orchestrator | 2026-01-30 06:39:31.392721 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:39:31.392732 | orchestrator | Friday 30 January 2026 06:39:24 +0000 (0:00:02.041) 0:51:17.795 ******** 2026-01-30 06:39:31.392743 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:31.392754 | orchestrator | 2026-01-30 06:39:31.392764 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 06:39:31.392775 | orchestrator | Friday 30 January 2026 06:39:25 +0000 (0:00:01.205) 0:51:19.000 ******** 2026-01-30 06:39:31.392786 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392796 | orchestrator | 2026-01-30 06:39:31.392807 
| orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 06:39:31.392818 | orchestrator | Friday 30 January 2026 06:39:26 +0000 (0:00:01.153) 0:51:20.154 ******** 2026-01-30 06:39:31.392829 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392840 | orchestrator | 2026-01-30 06:39:31.392850 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:39:31.392861 | orchestrator | Friday 30 January 2026 06:39:27 +0000 (0:00:01.207) 0:51:21.361 ******** 2026-01-30 06:39:31.392872 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392882 | orchestrator | 2026-01-30 06:39:31.392893 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 06:39:31.392904 | orchestrator | Friday 30 January 2026 06:39:28 +0000 (0:00:01.163) 0:51:22.525 ******** 2026-01-30 06:39:31.392914 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:31.392925 | orchestrator | 2026-01-30 06:39:31.392936 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 06:39:31.392947 | orchestrator | Friday 30 January 2026 06:39:30 +0000 (0:00:01.129) 0:51:23.654 ******** 2026-01-30 06:39:31.392964 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:36.162804 | orchestrator | 2026-01-30 06:39:36.162913 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-30 06:39:36.162925 | orchestrator | Friday 30 January 2026 06:39:31 +0000 (0:00:01.335) 0:51:24.990 ******** 2026-01-30 06:39:36.162932 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:36.162940 | orchestrator | 2026-01-30 06:39:36.162946 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-30 06:39:36.162953 | orchestrator | Friday 30 January 2026 06:39:32 +0000 (0:00:01.102) 0:51:26.093 ******** 
2026-01-30 06:39:36.162959 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:36.162967 | orchestrator | 2026-01-30 06:39:36.162973 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 06:39:36.162979 | orchestrator | Friday 30 January 2026 06:39:33 +0000 (0:00:01.155) 0:51:27.248 ******** 2026-01-30 06:39:36.162985 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:36.162991 | orchestrator | 2026-01-30 06:39:36.162998 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 06:39:36.163005 | orchestrator | Friday 30 January 2026 06:39:34 +0000 (0:00:01.122) 0:51:28.370 ******** 2026-01-30 06:39:36.163011 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:39:36.163017 | orchestrator | 2026-01-30 06:39:36.163023 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 06:39:36.163029 | orchestrator | Friday 30 January 2026 06:39:35 +0000 (0:00:01.149) 0:51:29.520 ******** 2026-01-30 06:39:36.163037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:36.163060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}})  2026-01-30 06:39:36.163070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:39:36.163078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}})  2026-01-30 06:39:36.163102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:36.163123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:36.163131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 06:39:36.163138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:36.163149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:39:36.163155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:36.163162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}})  2026-01-30 06:39:36.163169 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}})  2026-01-30 06:39:36.163186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:37.521941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 06:39:37.522083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:37.522098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:39:37.522123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:39:37.522133 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:39:37.522142 | orchestrator | 2026-01-30 06:39:37.522150 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:39:37.522159 | orchestrator | Friday 30 January 2026 06:39:37 +0000 (0:00:01.376) 0:51:30.897 ******** 2026-01-30 06:39:37.522182 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:37.522192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:37.522207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:37.522217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:37.522235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:37.522256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.738832 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.738955 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.738981 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.739005 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.739062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.739107 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.739142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.739250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.739288 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:39:38.739318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:40:13.464916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:40:13.465006 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465016 | orchestrator |
2026-01-30 06:40:13.465022 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-30 06:40:13.465029 | orchestrator | Friday 30 January 2026 06:39:38 +0000 (0:00:01.440) 0:51:32.337 ********
2026-01-30 06:40:13.465034 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:40:13.465054 | orchestrator |
2026-01-30 06:40:13.465059 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-30 06:40:13.465064 | orchestrator | Friday 30 January 2026 06:39:40 +0000 (0:00:01.521) 0:51:33.859 ********
2026-01-30 06:40:13.465069 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:40:13.465073 | orchestrator |
2026-01-30 06:40:13.465078 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:40:13.465100 | orchestrator | Friday 30 January 2026 06:39:41 +0000 (0:00:01.124) 0:51:34.983 ********
2026-01-30 06:40:13.465104 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:40:13.465109 | orchestrator |
2026-01-30 06:40:13.465113 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:40:13.465118 | orchestrator | Friday 30 January 2026 06:39:42 +0000 (0:00:01.516) 0:51:36.500 ********
2026-01-30 06:40:13.465122 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465127 | orchestrator |
2026-01-30 06:40:13.465131 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:40:13.465136 | orchestrator | Friday 30 January 2026 06:39:44 +0000 (0:00:01.147) 0:51:37.648 ********
2026-01-30 06:40:13.465140 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465145 | orchestrator |
2026-01-30 06:40:13.465149 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:40:13.465154 | orchestrator | Friday 30 January 2026 06:39:45 +0000 (0:00:01.230) 0:51:38.878 ********
2026-01-30 06:40:13.465158 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465163 | orchestrator |
2026-01-30 06:40:13.465167 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-30 06:40:13.465172 | orchestrator | Friday 30 January 2026 06:39:46 +0000 (0:00:01.174) 0:51:40.052 ********
2026-01-30 06:40:13.465177 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 06:40:13.465182 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 06:40:13.465186 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 06:40:13.465191 | orchestrator |
2026-01-30 06:40:13.465196 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-30 06:40:13.465200 | orchestrator | Friday 30 January 2026 06:39:48 +0000 (0:00:02.084) 0:51:42.137 ********
2026-01-30 06:40:13.465205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 06:40:13.465210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 06:40:13.465214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 06:40:13.465218 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465223 | orchestrator |
2026-01-30 06:40:13.465227 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-30 06:40:13.465232 | orchestrator | Friday 30 January 2026 06:39:49 +0000 (0:00:01.160) 0:51:43.297 ********
2026-01-30 06:40:13.465237 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-01-30 06:40:13.465242 | orchestrator |
2026-01-30 06:40:13.465248 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:40:13.465254 | orchestrator | Friday 30 January 2026 06:39:50 +0000 (0:00:01.138) 0:51:44.436 ********
2026-01-30 06:40:13.465258 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465263 | orchestrator |
2026-01-30 06:40:13.465267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:40:13.465272 | orchestrator | Friday 30 January 2026 06:39:51 +0000 (0:00:01.124) 0:51:45.560 ********
2026-01-30 06:40:13.465277 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465281 | orchestrator |
2026-01-30 06:40:13.465286 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:40:13.465290 | orchestrator | Friday 30 January 2026 06:39:53 +0000 (0:00:01.146) 0:51:46.707 ********
2026-01-30 06:40:13.465295 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465299 | orchestrator |
2026-01-30 06:40:13.465304 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:40:13.465308 | orchestrator | Friday 30 January 2026 06:39:54 +0000 (0:00:01.125) 0:51:47.833 ********
2026-01-30 06:40:13.465313 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:40:13.465317 | orchestrator |
2026-01-30 06:40:13.465322 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:40:13.465331 | orchestrator | Friday 30 January 2026 06:39:55 +0000 (0:00:01.311) 0:51:49.144 ********
2026-01-30 06:40:13.465336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:40:13.465351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:40:13.465356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:40:13.465361 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465365 | orchestrator |
2026-01-30 06:40:13.465370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:40:13.465374 | orchestrator | Friday 30 January 2026 06:39:56 +0000 (0:00:01.397) 0:51:50.541 ********
2026-01-30 06:40:13.465379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:40:13.465383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:40:13.465388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:40:13.465392 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465397 | orchestrator |
2026-01-30 06:40:13.465405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:40:13.465410 | orchestrator | Friday 30 January 2026 06:39:58 +0000 (0:00:01.435) 0:51:51.977 ********
2026-01-30 06:40:13.465415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:40:13.465419 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:40:13.465423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:40:13.465428 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465432 | orchestrator |
2026-01-30 06:40:13.465437 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:40:13.465441 | orchestrator | Friday 30 January 2026 06:39:59 +0000 (0:00:01.445) 0:51:53.423 ********
2026-01-30 06:40:13.465446 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:40:13.465450 | orchestrator |
2026-01-30 06:40:13.465455 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:40:13.465459 | orchestrator | Friday 30 January 2026 06:40:01 +0000 (0:00:01.195) 0:51:54.619 ********
2026-01-30 06:40:13.465464 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-30 06:40:13.465502 | orchestrator |
2026-01-30 06:40:13.465508 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-30 06:40:13.465512 | orchestrator | Friday 30 January 2026 06:40:02 +0000 (0:00:01.735) 0:51:56.355 ********
2026-01-30 06:40:13.465517 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:40:13.465521 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:40:13.465526 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:40:13.465530 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:40:13.465535 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:40:13.465539 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:40:13.465544 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:40:13.465548 | orchestrator |
2026-01-30 06:40:13.465553 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-30 06:40:13.465557 | orchestrator | Friday 30 January 2026 06:40:04 +0000 (0:00:02.145) 0:51:58.500 ********
2026-01-30 06:40:13.465562 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:40:13.465566 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:40:13.465571 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:40:13.465576 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:40:13.465580 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-01-30 06:40:13.465592 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:40:13.465596 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:40:13.465601 | orchestrator |
2026-01-30 06:40:13.465605 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-01-30 06:40:13.465610 | orchestrator | Friday 30 January 2026 06:40:07 +0000 (0:00:02.613) 0:52:01.114 ********
2026-01-30 06:40:13.465614 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465619 | orchestrator |
2026-01-30 06:40:13.465624 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 06:40:13.465628 | orchestrator | Friday 30 January 2026 06:40:08 +0000 (0:00:01.123) 0:52:02.237 ********
2026-01-30 06:40:13.465633 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-01-30 06:40:13.465637 | orchestrator |
2026-01-30 06:40:13.465642 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 06:40:13.465646 | orchestrator | Friday 30 January 2026 06:40:09 +0000 (0:00:01.098) 0:52:03.335 ********
2026-01-30 06:40:13.465650 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-01-30 06:40:13.465655 | orchestrator |
2026-01-30 06:40:13.465660 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 06:40:13.465664 | orchestrator | Friday 30 January 2026 06:40:10 +0000 (0:00:01.141) 0:52:04.477 ********
2026-01-30 06:40:13.465669 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:40:13.465674 | orchestrator |
2026-01-30 06:40:13.465682 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 06:40:13.465690 | orchestrator | Friday 30 January 2026 06:40:11 +0000 (0:00:01.097) 0:52:05.574 ********
2026-01-30 06:40:13.465697 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:40:13.465705 | orchestrator |
2026-01-30 06:40:13.465712 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 06:40:13.465725 | orchestrator | Friday 30 January 2026 06:40:13 +0000 (0:00:01.487) 0:52:07.062 ********
2026-01-30 06:41:03.491781 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.491888 | orchestrator |
2026-01-30 06:41:03.491903 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 06:41:03.491916 | orchestrator | Friday 30 January 2026 06:40:15 +0000 (0:00:01.581) 0:52:08.644 ********
2026-01-30 06:41:03.491926 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.491936 | orchestrator |
2026-01-30 06:41:03.491946 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 06:41:03.491956 | orchestrator | Friday 30 January 2026 06:40:16 +0000 (0:00:01.552) 0:52:10.196 ********
2026-01-30 06:41:03.491966 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.491977 | orchestrator |
2026-01-30 06:41:03.491987 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 06:41:03.492012 | orchestrator | Friday 30 January 2026 06:40:17 +0000 (0:00:01.130) 0:52:11.326 ********
2026-01-30 06:41:03.492022 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492033 | orchestrator |
2026-01-30 06:41:03.492043 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 06:41:03.492053 | orchestrator | Friday 30 January 2026 06:40:18 +0000 (0:00:01.204) 0:52:12.531 ********
2026-01-30 06:41:03.492062 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492072 | orchestrator |
2026-01-30 06:41:03.492081 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 06:41:03.492091 | orchestrator | Friday 30 January 2026 06:40:20 +0000 (0:00:01.217) 0:52:13.748 ********
2026-01-30 06:41:03.492101 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.492110 | orchestrator |
2026-01-30 06:41:03.492120 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 06:41:03.492129 | orchestrator | Friday 30 January 2026 06:40:21 +0000 (0:00:01.539) 0:52:15.288 ********
2026-01-30 06:41:03.492159 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.492170 | orchestrator |
2026-01-30 06:41:03.492179 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 06:41:03.492189 | orchestrator | Friday 30 January 2026 06:40:23 +0000 (0:00:01.527) 0:52:16.816 ********
2026-01-30 06:41:03.492198 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492208 | orchestrator |
2026-01-30 06:41:03.492217 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 06:41:03.492226 | orchestrator | Friday 30 January 2026 06:40:24 +0000 (0:00:01.176) 0:52:17.992 ********
2026-01-30 06:41:03.492236 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492245 | orchestrator |
2026-01-30 06:41:03.492255 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 06:41:03.492264 | orchestrator | Friday 30 January 2026 06:40:25 +0000 (0:00:01.138) 0:52:19.131 ********
2026-01-30 06:41:03.492274 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.492284 | orchestrator |
2026-01-30 06:41:03.492334 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 06:41:03.492347 | orchestrator | Friday 30 January 2026 06:40:26 +0000 (0:00:01.135) 0:52:20.267 ********
2026-01-30 06:41:03.492360 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.492371 | orchestrator |
2026-01-30 06:41:03.492383 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 06:41:03.492394 | orchestrator | Friday 30 January 2026 06:40:27 +0000 (0:00:01.127) 0:52:21.395 ********
2026-01-30 06:41:03.492405 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.492416 | orchestrator |
2026-01-30 06:41:03.492427 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 06:41:03.492439 | orchestrator | Friday 30 January 2026 06:40:28 +0000 (0:00:01.163) 0:52:22.559 ********
2026-01-30 06:41:03.492450 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492461 | orchestrator |
2026-01-30 06:41:03.492472 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 06:41:03.492484 | orchestrator | Friday 30 January 2026 06:40:30 +0000 (0:00:01.108) 0:52:23.668 ********
2026-01-30 06:41:03.492495 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492506 | orchestrator |
2026-01-30 06:41:03.492515 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 06:41:03.492525 | orchestrator | Friday 30 January 2026 06:40:31 +0000 (0:00:01.111) 0:52:24.779 ********
2026-01-30 06:41:03.492534 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492544 | orchestrator |
2026-01-30 06:41:03.492553 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 06:41:03.492563 | orchestrator | Friday 30 January 2026 06:40:32 +0000 (0:00:01.100) 0:52:25.880 ********
2026-01-30 06:41:03.492572 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.492582 | orchestrator |
2026-01-30 06:41:03.492591 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 06:41:03.492601 | orchestrator | Friday 30 January 2026 06:40:33 +0000 (0:00:01.125) 0:52:27.005 ********
2026-01-30 06:41:03.492610 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.492620 | orchestrator |
2026-01-30 06:41:03.492629 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 06:41:03.492639 | orchestrator | Friday 30 January 2026 06:40:34 +0000 (0:00:01.245) 0:52:28.251 ********
2026-01-30 06:41:03.492648 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492658 | orchestrator |
2026-01-30 06:41:03.492667 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 06:41:03.492677 | orchestrator | Friday 30 January 2026 06:40:35 +0000 (0:00:00.910) 0:52:29.162 ********
2026-01-30 06:41:03.492686 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492702 | orchestrator |
2026-01-30 06:41:03.492719 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 06:41:03.492736 | orchestrator | Friday 30 January 2026 06:40:36 +0000 (0:00:01.077) 0:52:30.240 ********
2026-01-30 06:41:03.492764 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492780 | orchestrator |
2026-01-30 06:41:03.492796 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 06:41:03.492814 | orchestrator | Friday 30 January 2026 06:40:37 +0000 (0:00:01.100) 0:52:31.340 ********
2026-01-30 06:41:03.492831 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492843 | orchestrator |
2026-01-30 06:41:03.492852 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 06:41:03.492880 | orchestrator | Friday 30 January 2026 06:40:38 +0000 (0:00:01.111) 0:52:32.452 ********
2026-01-30 06:41:03.492890 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492900 | orchestrator |
2026-01-30 06:41:03.492909 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 06:41:03.492919 | orchestrator | Friday 30 January 2026 06:40:39 +0000 (0:00:01.087) 0:52:33.539 ********
2026-01-30 06:41:03.492929 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492939 | orchestrator |
2026-01-30 06:41:03.492950 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 06:41:03.492961 | orchestrator | Friday 30 January 2026 06:40:41 +0000 (0:00:01.107) 0:52:34.647 ********
2026-01-30 06:41:03.492971 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.492982 | orchestrator |
2026-01-30 06:41:03.492993 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-01-30 06:41:03.493005 | orchestrator | Friday 30 January 2026 06:40:42 +0000 (0:00:01.120) 0:52:35.768 ********
2026-01-30 06:41:03.493015 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493027 | orchestrator |
2026-01-30 06:41:03.493037 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-01-30 06:41:03.493048 | orchestrator | Friday 30 January 2026 06:40:43 +0000 (0:00:01.100) 0:52:36.869 ********
2026-01-30 06:41:03.493059 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493070 | orchestrator |
2026-01-30 06:41:03.493080 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-01-30 06:41:03.493091 | orchestrator | Friday 30 January 2026 06:40:44 +0000 (0:00:01.096) 0:52:37.965 ********
2026-01-30 06:41:03.493102 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493113 | orchestrator |
2026-01-30 06:41:03.493123 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-01-30 06:41:03.493134 | orchestrator | Friday 30 January 2026 06:40:45 +0000 (0:00:01.136) 0:52:39.101 ********
2026-01-30 06:41:03.493145 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493155 | orchestrator |
2026-01-30 06:41:03.493166 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-01-30 06:41:03.493177 | orchestrator | Friday 30 January 2026 06:40:46 +0000 (0:00:01.102) 0:52:40.204 ********
2026-01-30 06:41:03.493187 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493198 | orchestrator |
2026-01-30 06:41:03.493209 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-30 06:41:03.493219 | orchestrator | Friday 30 January 2026 06:40:47 +0000 (0:00:01.246) 0:52:41.451 ********
2026-01-30 06:41:03.493230 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.493241 | orchestrator |
2026-01-30 06:41:03.493384 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-30 06:41:03.493408 | orchestrator | Friday 30 January 2026 06:40:49 +0000 (0:00:01.958) 0:52:43.409 ********
2026-01-30 06:41:03.493419 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.493430 | orchestrator |
2026-01-30 06:41:03.493441 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 06:41:03.493451 | orchestrator | Friday 30 January 2026 06:40:52 +0000 (0:00:02.257) 0:52:45.666 ********
2026-01-30 06:41:03.493462 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-01-30 06:41:03.493474 | orchestrator |
2026-01-30 06:41:03.493485 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-30 06:41:03.493505 | orchestrator | Friday 30 January 2026 06:40:53 +0000 (0:00:01.115) 0:52:46.782 ********
2026-01-30 06:41:03.493516 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493527 | orchestrator |
2026-01-30 06:41:03.493538 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-30 06:41:03.493548 | orchestrator | Friday 30 January 2026 06:40:54 +0000 (0:00:01.133) 0:52:47.915 ********
2026-01-30 06:41:03.493559 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493570 | orchestrator |
2026-01-30 06:41:03.493581 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-30 06:41:03.493591 | orchestrator | Friday 30 January 2026 06:40:55 +0000 (0:00:01.117) 0:52:49.033 ********
2026-01-30 06:41:03.493602 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 06:41:03.493613 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 06:41:03.493624 | orchestrator |
2026-01-30 06:41:03.493635 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-30 06:41:03.493645 | orchestrator | Friday 30 January 2026 06:40:57 +0000 (0:00:01.807) 0:52:50.841 ********
2026-01-30 06:41:03.493656 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:41:03.493667 | orchestrator |
2026-01-30 06:41:03.493678 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-30 06:41:03.493688 | orchestrator | Friday 30 January 2026 06:40:58 +0000 (0:00:01.564) 0:52:52.406 ********
2026-01-30 06:41:03.493699 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:41:03.493710 | orchestrator |
2026-01-30 06:41:03.493721 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-30 06:41:03.493733 | orchestrator | Friday 30 January 2026 06:40:59 +0000 (0:00:01.149) 0:52:53.556 ********
2026-01-30 06:40:59.493753 |
orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:03.493771 | orchestrator | 2026-01-30 06:41:03.493791 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:41:03.493811 | orchestrator | Friday 30 January 2026 06:41:01 +0000 (0:00:01.125) 0:52:54.681 ******** 2026-01-30 06:41:03.493829 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:03.493847 | orchestrator | 2026-01-30 06:41:03.493859 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:41:03.493870 | orchestrator | Friday 30 January 2026 06:41:02 +0000 (0:00:01.123) 0:52:55.804 ******** 2026-01-30 06:41:03.493880 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-01-30 06:41:03.493891 | orchestrator | 2026-01-30 06:41:03.493902 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:41:03.493924 | orchestrator | Friday 30 January 2026 06:41:03 +0000 (0:00:01.285) 0:52:57.090 ******** 2026-01-30 06:41:48.967449 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:41:48.967590 | orchestrator | 2026-01-30 06:41:48.967618 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:41:48.967639 | orchestrator | Friday 30 January 2026 06:41:05 +0000 (0:00:01.702) 0:52:58.793 ******** 2026-01-30 06:41:48.967660 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:41:48.967679 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:41:48.967698 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:41:48.967739 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.967762 | orchestrator | 2026-01-30 06:41:48.967781 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-01-30 06:41:48.967800 | orchestrator | Friday 30 January 2026 06:41:06 +0000 (0:00:01.071) 0:52:59.864 ******** 2026-01-30 06:41:48.967818 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.967837 | orchestrator | 2026-01-30 06:41:48.967854 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-01-30 06:41:48.967872 | orchestrator | Friday 30 January 2026 06:41:07 +0000 (0:00:00.896) 0:53:00.760 ******** 2026-01-30 06:41:48.967924 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.967944 | orchestrator | 2026-01-30 06:41:48.967964 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:41:48.967984 | orchestrator | Friday 30 January 2026 06:41:08 +0000 (0:00:00.992) 0:53:01.753 ******** 2026-01-30 06:41:48.968003 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968026 | orchestrator | 2026-01-30 06:41:48.968045 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:41:48.968064 | orchestrator | Friday 30 January 2026 06:41:09 +0000 (0:00:01.094) 0:53:02.848 ******** 2026-01-30 06:41:48.968083 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968102 | orchestrator | 2026-01-30 06:41:48.968123 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:41:48.968143 | orchestrator | Friday 30 January 2026 06:41:10 +0000 (0:00:00.998) 0:53:03.846 ******** 2026-01-30 06:41:48.968192 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968212 | orchestrator | 2026-01-30 06:41:48.968232 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:41:48.968252 | orchestrator | Friday 30 January 2026 06:41:11 +0000 (0:00:00.906) 0:53:04.752 ******** 2026-01-30 06:41:48.968272 | orchestrator | 
ok: [testbed-node-3] 2026-01-30 06:41:48.968293 | orchestrator | 2026-01-30 06:41:48.968314 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:41:48.968335 | orchestrator | Friday 30 January 2026 06:41:13 +0000 (0:00:02.506) 0:53:07.259 ******** 2026-01-30 06:41:48.968355 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:41:48.968375 | orchestrator | 2026-01-30 06:41:48.968394 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:41:48.968413 | orchestrator | Friday 30 January 2026 06:41:14 +0000 (0:00:01.169) 0:53:08.429 ******** 2026-01-30 06:41:48.968433 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-01-30 06:41:48.968453 | orchestrator | 2026-01-30 06:41:48.968473 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:41:48.968494 | orchestrator | Friday 30 January 2026 06:41:15 +0000 (0:00:01.115) 0:53:09.544 ******** 2026-01-30 06:41:48.968514 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968534 | orchestrator | 2026-01-30 06:41:48.968555 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:41:48.968576 | orchestrator | Friday 30 January 2026 06:41:17 +0000 (0:00:01.142) 0:53:10.687 ******** 2026-01-30 06:41:48.968595 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968614 | orchestrator | 2026-01-30 06:41:48.968633 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:41:48.968654 | orchestrator | Friday 30 January 2026 06:41:18 +0000 (0:00:01.181) 0:53:11.868 ******** 2026-01-30 06:41:48.968674 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968695 | orchestrator | 2026-01-30 06:41:48.968716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-01-30 06:41:48.968736 | orchestrator | Friday 30 January 2026 06:41:19 +0000 (0:00:01.118) 0:53:12.987 ******** 2026-01-30 06:41:48.968757 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968777 | orchestrator | 2026-01-30 06:41:48.968794 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-01-30 06:41:48.968813 | orchestrator | Friday 30 January 2026 06:41:20 +0000 (0:00:01.161) 0:53:14.148 ******** 2026-01-30 06:41:48.968831 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968847 | orchestrator | 2026-01-30 06:41:48.968864 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:41:48.968881 | orchestrator | Friday 30 January 2026 06:41:21 +0000 (0:00:01.138) 0:53:15.287 ******** 2026-01-30 06:41:48.968898 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.968916 | orchestrator | 2026-01-30 06:41:48.968934 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:41:48.968967 | orchestrator | Friday 30 January 2026 06:41:22 +0000 (0:00:01.130) 0:53:16.417 ******** 2026-01-30 06:41:48.968985 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.969004 | orchestrator | 2026-01-30 06:41:48.969021 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:41:48.969038 | orchestrator | Friday 30 January 2026 06:41:23 +0000 (0:00:01.133) 0:53:17.551 ******** 2026-01-30 06:41:48.969056 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.969074 | orchestrator | 2026-01-30 06:41:48.969091 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:41:48.969108 | orchestrator | Friday 30 January 2026 06:41:25 +0000 (0:00:01.127) 0:53:18.678 ******** 2026-01-30 06:41:48.969126 | orchestrator | ok: [testbed-node-3] 
2026-01-30 06:41:48.969144 | orchestrator | 2026-01-30 06:41:48.969224 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:41:48.969265 | orchestrator | Friday 30 January 2026 06:41:26 +0000 (0:00:01.143) 0:53:19.822 ******** 2026-01-30 06:41:48.969283 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-01-30 06:41:48.969302 | orchestrator | 2026-01-30 06:41:48.969318 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:41:48.969336 | orchestrator | Friday 30 January 2026 06:41:27 +0000 (0:00:01.119) 0:53:20.941 ******** 2026-01-30 06:41:48.969352 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-01-30 06:41:48.969369 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-01-30 06:41:48.969386 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-01-30 06:41:48.969414 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-01-30 06:41:48.969430 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-01-30 06:41:48.969447 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-01-30 06:41:48.969463 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-01-30 06:41:48.969479 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:41:48.969496 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:41:48.969513 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:41:48.969530 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:41:48.969547 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:41:48.969564 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:41:48.969580 | 
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:41:48.969596 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-01-30 06:41:48.969611 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-01-30 06:41:48.969628 | orchestrator | 2026-01-30 06:41:48.969644 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:41:48.969661 | orchestrator | Friday 30 January 2026 06:41:34 +0000 (0:00:06.834) 0:53:27.776 ******** 2026-01-30 06:41:48.969677 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-01-30 06:41:48.969694 | orchestrator | 2026-01-30 06:41:48.969711 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-30 06:41:48.969728 | orchestrator | Friday 30 January 2026 06:41:35 +0000 (0:00:01.230) 0:53:29.007 ******** 2026-01-30 06:41:48.969745 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:41:48.969763 | orchestrator | 2026-01-30 06:41:48.969779 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-30 06:41:48.969795 | orchestrator | Friday 30 January 2026 06:41:36 +0000 (0:00:01.500) 0:53:30.508 ******** 2026-01-30 06:41:48.969813 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:41:48.969842 | orchestrator | 2026-01-30 06:41:48.969858 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:41:48.969873 | orchestrator | Friday 30 January 2026 06:41:38 +0000 (0:00:01.997) 0:53:32.505 ******** 2026-01-30 06:41:48.969889 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.969906 | orchestrator | 
2026-01-30 06:41:48.969922 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:41:48.969938 | orchestrator | Friday 30 January 2026 06:41:40 +0000 (0:00:01.138) 0:53:33.644 ******** 2026-01-30 06:41:48.969954 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.969972 | orchestrator | 2026-01-30 06:41:48.969987 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:41:48.970002 | orchestrator | Friday 30 January 2026 06:41:41 +0000 (0:00:01.156) 0:53:34.801 ******** 2026-01-30 06:41:48.970096 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.970120 | orchestrator | 2026-01-30 06:41:48.970138 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-30 06:41:48.970213 | orchestrator | Friday 30 January 2026 06:41:42 +0000 (0:00:01.115) 0:53:35.916 ******** 2026-01-30 06:41:48.970231 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.970247 | orchestrator | 2026-01-30 06:41:48.970264 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:41:48.970281 | orchestrator | Friday 30 January 2026 06:41:43 +0000 (0:00:01.112) 0:53:37.029 ******** 2026-01-30 06:41:48.970297 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.970313 | orchestrator | 2026-01-30 06:41:48.970330 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:41:48.970347 | orchestrator | Friday 30 January 2026 06:41:44 +0000 (0:00:01.110) 0:53:38.139 ******** 2026-01-30 06:41:48.970363 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.970379 | orchestrator | 2026-01-30 06:41:48.970395 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:41:48.970412 | 
orchestrator | Friday 30 January 2026 06:41:45 +0000 (0:00:01.100) 0:53:39.240 ******** 2026-01-30 06:41:48.970427 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.970444 | orchestrator | 2026-01-30 06:41:48.970460 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-01-30 06:41:48.970477 | orchestrator | Friday 30 January 2026 06:41:46 +0000 (0:00:01.123) 0:53:40.364 ******** 2026-01-30 06:41:48.970494 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.970510 | orchestrator | 2026-01-30 06:41:48.970527 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:41:48.970544 | orchestrator | Friday 30 January 2026 06:41:47 +0000 (0:00:01.100) 0:53:41.465 ******** 2026-01-30 06:41:48.970561 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:41:48.970576 | orchestrator | 2026-01-30 06:41:48.970607 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:42:45.676699 | orchestrator | Friday 30 January 2026 06:41:48 +0000 (0:00:01.101) 0:53:42.566 ******** 2026-01-30 06:42:45.676814 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.676828 | orchestrator | 2026-01-30 06:42:45.676840 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:42:45.676850 | orchestrator | Friday 30 January 2026 06:41:50 +0000 (0:00:01.108) 0:53:43.675 ******** 2026-01-30 06:42:45.676861 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.676871 | orchestrator | 2026-01-30 06:42:45.676882 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:42:45.676909 | orchestrator | Friday 30 January 2026 06:41:51 +0000 (0:00:01.130) 0:53:44.805 ******** 2026-01-30 06:42:45.676919 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] 2026-01-30 06:42:45.676929 | orchestrator | 2026-01-30 06:42:45.676938 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:42:45.676972 | orchestrator | Friday 30 January 2026 06:41:56 +0000 (0:00:04.899) 0:53:49.705 ******** 2026-01-30 06:42:45.677071 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:42:45.677081 | orchestrator | 2026-01-30 06:42:45.677087 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:42:45.677093 | orchestrator | Friday 30 January 2026 06:41:57 +0000 (0:00:01.164) 0:53:50.869 ******** 2026-01-30 06:42:45.677101 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-01-30 06:42:45.677110 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-01-30 06:42:45.677117 | orchestrator | 2026-01-30 06:42:45.677123 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:42:45.677129 | orchestrator | Friday 30 January 2026 06:42:02 +0000 (0:00:05.141) 0:53:56.011 ******** 2026-01-30 06:42:45.677135 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677141 | orchestrator | 2026-01-30 06:42:45.677146 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-01-30 06:42:45.677152 | orchestrator | Friday 30 January 2026 06:42:03 +0000 (0:00:01.131) 0:53:57.142 ******** 2026-01-30 06:42:45.677158 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677163 | orchestrator | 2026-01-30 06:42:45.677169 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:42:45.677175 | orchestrator | Friday 30 January 2026 06:42:04 +0000 (0:00:01.113) 0:53:58.256 ******** 2026-01-30 06:42:45.677181 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677186 | orchestrator | 2026-01-30 06:42:45.677192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:42:45.677198 | orchestrator | Friday 30 January 2026 06:42:05 +0000 (0:00:01.197) 0:53:59.454 ******** 2026-01-30 06:42:45.677204 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677209 | orchestrator | 2026-01-30 06:42:45.677215 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:42:45.677221 | orchestrator | Friday 30 January 2026 06:42:07 +0000 (0:00:01.162) 0:54:00.617 ******** 2026-01-30 06:42:45.677226 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677233 | orchestrator | 2026-01-30 06:42:45.677240 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:42:45.677251 | orchestrator | Friday 30 January 2026 06:42:08 +0000 (0:00:01.162) 0:54:01.780 ******** 2026-01-30 06:42:45.677262 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:42:45.677274 | orchestrator | 2026-01-30 06:42:45.677284 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:42:45.677294 | orchestrator | Friday 30 January 2026 06:42:09 +0000 (0:00:01.291) 0:54:03.072 
******** 2026-01-30 06:42:45.677304 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:42:45.677315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:42:45.677325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:42:45.677335 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677345 | orchestrator | 2026-01-30 06:42:45.677355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:42:45.677366 | orchestrator | Friday 30 January 2026 06:42:10 +0000 (0:00:01.423) 0:54:04.496 ******** 2026-01-30 06:42:45.677386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:42:45.677397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:42:45.677408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:42:45.677418 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677429 | orchestrator | 2026-01-30 06:42:45.677440 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:42:45.677451 | orchestrator | Friday 30 January 2026 06:42:12 +0000 (0:00:01.419) 0:54:05.915 ******** 2026-01-30 06:42:45.677461 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:42:45.677472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:42:45.677481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:42:45.677508 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677520 | orchestrator | 2026-01-30 06:42:45.677531 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:42:45.677542 | orchestrator | Friday 30 January 2026 06:42:14 +0000 (0:00:01.714) 0:54:07.629 ******** 2026-01-30 06:42:45.677552 | orchestrator | 
ok: [testbed-node-3] 2026-01-30 06:42:45.677562 | orchestrator | 2026-01-30 06:42:45.677573 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:42:45.677584 | orchestrator | Friday 30 January 2026 06:42:15 +0000 (0:00:01.190) 0:54:08.820 ******** 2026-01-30 06:42:45.677603 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-30 06:42:45.677613 | orchestrator | 2026-01-30 06:42:45.677623 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:42:45.677632 | orchestrator | Friday 30 January 2026 06:42:17 +0000 (0:00:01.871) 0:54:10.691 ******** 2026-01-30 06:42:45.677642 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:42:45.677652 | orchestrator | 2026-01-30 06:42:45.677661 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-30 06:42:45.677671 | orchestrator | Friday 30 January 2026 06:42:18 +0000 (0:00:01.753) 0:54:12.445 ******** 2026-01-30 06:42:45.677680 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677689 | orchestrator | 2026-01-30 06:42:45.677695 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-30 06:42:45.677700 | orchestrator | Friday 30 January 2026 06:42:19 +0000 (0:00:01.144) 0:54:13.590 ******** 2026-01-30 06:42:45.677706 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3 2026-01-30 06:42:45.677712 | orchestrator | 2026-01-30 06:42:45.677717 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-30 06:42:45.677723 | orchestrator | Friday 30 January 2026 06:42:21 +0000 (0:00:01.504) 0:54:15.094 ******** 2026-01-30 06:42:45.677729 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-30 06:42:45.677734 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 
2026-01-30 06:42:45.677740 | orchestrator | 2026-01-30 06:42:45.677745 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-30 06:42:45.677752 | orchestrator | Friday 30 January 2026 06:42:23 +0000 (0:00:01.850) 0:54:16.944 ******** 2026-01-30 06:42:45.677761 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 06:42:45.677771 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-30 06:42:45.677777 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 06:42:45.677782 | orchestrator | 2026-01-30 06:42:45.677788 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-30 06:42:45.677794 | orchestrator | Friday 30 January 2026 06:42:26 +0000 (0:00:03.438) 0:54:20.383 ******** 2026-01-30 06:42:45.677800 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-01-30 06:42:45.677805 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-30 06:42:45.677811 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:42:45.677817 | orchestrator | 2026-01-30 06:42:45.677822 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-30 06:42:45.677835 | orchestrator | Friday 30 January 2026 06:42:28 +0000 (0:00:01.989) 0:54:22.373 ******** 2026-01-30 06:42:45.677840 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:42:45.677846 | orchestrator | 2026-01-30 06:42:45.677852 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-30 06:42:45.677857 | orchestrator | Friday 30 January 2026 06:42:30 +0000 (0:00:01.526) 0:54:23.899 ******** 2026-01-30 06:42:45.677863 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:42:45.677869 | orchestrator | 2026-01-30 06:42:45.677874 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-30 
06:42:45.677880 | orchestrator | Friday 30 January 2026 06:42:31 +0000 (0:00:01.103) 0:54:25.003 ********
2026-01-30 06:42:45.677886 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3
2026-01-30 06:42:45.677892 | orchestrator |
2026-01-30 06:42:45.677898 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-30 06:42:45.677904 | orchestrator | Friday 30 January 2026 06:42:32 +0000 (0:00:01.553) 0:54:26.556 ********
2026-01-30 06:42:45.677913 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3
2026-01-30 06:42:45.677923 | orchestrator |
2026-01-30 06:42:45.677932 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-30 06:42:45.677941 | orchestrator | Friday 30 January 2026 06:42:34 +0000 (0:00:01.442) 0:54:27.999 ********
2026-01-30 06:42:45.677951 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:42:45.677960 | orchestrator |
2026-01-30 06:42:45.677969 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-30 06:42:45.678000 | orchestrator | Friday 30 January 2026 06:42:36 +0000 (0:00:02.043) 0:54:30.042 ********
2026-01-30 06:42:45.678012 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:42:45.678072 | orchestrator |
2026-01-30 06:42:45.678080 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-30 06:42:45.678090 | orchestrator | Friday 30 January 2026 06:42:38 +0000 (0:00:01.938) 0:54:31.981 ********
2026-01-30 06:42:45.678099 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:42:45.678109 | orchestrator |
2026-01-30 06:42:45.678118 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-30 06:42:45.678128 | orchestrator | Friday 30 January 2026 06:42:40 +0000 (0:00:02.243) 0:54:34.224 ********
2026-01-30 06:42:45.678138 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:42:45.678148 | orchestrator |
2026-01-30 06:42:45.678158 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-30 06:42:45.678167 | orchestrator | Friday 30 January 2026 06:42:42 +0000 (0:00:02.312) 0:54:36.537 ********
2026-01-30 06:42:45.678177 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:42:45.678183 | orchestrator |
2026-01-30 06:42:45.678189 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-01-30 06:42:45.678194 | orchestrator | Friday 30 January 2026 06:42:44 +0000 (0:00:01.114) 0:54:38.161 ********
2026-01-30 06:42:45.678210 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:43:18.404469 | orchestrator |
2026-01-30 06:43:18.404595 | orchestrator | TASK [Restart active mds] ******************************************************
2026-01-30 06:43:18.404616 | orchestrator | Friday 30 January 2026 06:42:45 +0000 (0:00:01.114) 0:54:39.275 ********
2026-01-30 06:43:18.404629 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:43:18.404641 | orchestrator |
2026-01-30 06:43:18.404653 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-01-30 06:43:18.404665 | orchestrator |
2026-01-30 06:43:18.404677 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:43:18.404707 | orchestrator | Friday 30 January 2026 06:42:53 +0000 (0:00:08.155) 0:54:47.431 ********
2026-01-30 06:43:18.404722 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5, testbed-node-4
2026-01-30 06:43:18.404735 | orchestrator |
2026-01-30 06:43:18.404746 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-30 06:43:18.404783 | orchestrator | Friday 30 January 2026 06:42:55 +0000 (0:00:01.521) 0:54:48.953 ********
2026-01-30 06:43:18.404796 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.404809 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.404821 | orchestrator |
2026-01-30 06:43:18.404833 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-30 06:43:18.404845 | orchestrator | Friday 30 January 2026 06:42:56 +0000 (0:00:01.598) 0:54:50.551 ********
2026-01-30 06:43:18.404856 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.404868 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.404879 | orchestrator |
2026-01-30 06:43:18.404943 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 06:43:18.404957 | orchestrator | Friday 30 January 2026 06:42:58 +0000 (0:00:01.247) 0:54:51.799 ********
2026-01-30 06:43:18.404968 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.404980 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.404992 | orchestrator |
2026-01-30 06:43:18.405004 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 06:43:18.405017 | orchestrator | Friday 30 January 2026 06:42:59 +0000 (0:00:01.610) 0:54:53.410 ********
2026-01-30 06:43:18.405029 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.405041 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.405054 | orchestrator |
2026-01-30 06:43:18.405066 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 06:43:18.405079 | orchestrator | Friday 30 January 2026 06:43:01 +0000 (0:00:01.263) 0:54:54.673 ********
2026-01-30 06:43:18.405092 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.405106 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.405118 | orchestrator |
2026-01-30 06:43:18.405130 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 06:43:18.405142 | orchestrator | Friday 30 January 2026 06:43:02 +0000 (0:00:01.246) 0:54:55.920 ********
2026-01-30 06:43:18.405154 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.405166 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.405178 | orchestrator |
2026-01-30 06:43:18.405191 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 06:43:18.405205 | orchestrator | Friday 30 January 2026 06:43:03 +0000 (0:00:01.235) 0:54:57.156 ********
2026-01-30 06:43:18.405219 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:18.405233 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:43:18.405245 | orchestrator |
2026-01-30 06:43:18.405256 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 06:43:18.405269 | orchestrator | Friday 30 January 2026 06:43:04 +0000 (0:00:01.240) 0:54:58.397 ********
2026-01-30 06:43:18.405282 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.405295 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.405307 | orchestrator |
2026-01-30 06:43:18.405320 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 06:43:18.405333 | orchestrator | Friday 30 January 2026 06:43:05 +0000 (0:00:01.207) 0:54:59.604 ********
2026-01-30 06:43:18.405347 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:43:18.405360 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:43:18.405373 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:43:18.405386 | orchestrator |
2026-01-30 06:43:18.405396 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 06:43:18.405403 | orchestrator | Friday 30 January 2026 06:43:07 +0000 (0:00:01.679) 0:55:01.284 ********
2026-01-30 06:43:18.405413 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:18.405425 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:18.405440 | orchestrator |
2026-01-30 06:43:18.405459 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 06:43:18.405470 | orchestrator | Friday 30 January 2026 06:43:09 +0000 (0:00:01.449) 0:55:02.734 ********
2026-01-30 06:43:18.405495 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:43:18.405506 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:43:18.405518 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:43:18.405529 | orchestrator |
2026-01-30 06:43:18.405539 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 06:43:18.405550 | orchestrator | Friday 30 January 2026 06:43:12 +0000 (0:00:03.394) 0:55:06.129 ********
2026-01-30 06:43:18.405562 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-30 06:43:18.405572 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-30 06:43:18.405584 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-30 06:43:18.405596 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:18.405609 | orchestrator |
2026-01-30 06:43:18.405621 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-30 06:43:18.405633 | orchestrator | Friday 30 January 2026 06:43:13 +0000 (0:00:01.402) 0:55:07.531 ********
2026-01-30 06:43:18.405667 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405687 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405702 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:18.405710 | orchestrator |
2026-01-30 06:43:18.405717 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-30 06:43:18.405724 | orchestrator | Friday 30 January 2026 06:43:15 +0000 (0:00:02.021) 0:55:09.553 ********
2026-01-30 06:43:18.405733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405744 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405751 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405759 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:18.405766 | orchestrator |
2026-01-30 06:43:18.405773 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-30 06:43:18.405780 | orchestrator | Friday 30 January 2026 06:43:17 +0000 (0:00:01.153) 0:55:10.706 ********
2026-01-30 06:43:18.405790 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:43:09.659974', 'end': '2026-01-30 06:43:09.718845', 'delta': '0:00:00.058871', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405808 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:43:10.680022', 'end': '2026-01-30 06:43:10.732100', 'delta': '0:00:00.052078', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:43:18.405826 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:43:11.265392', 'end': '2026-01-30 06:43:11.317504', 'delta': '0:00:00.052112', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:43:37.590507 | orchestrator |
2026-01-30 06:43:37.590595 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-30 06:43:37.590605 | orchestrator | Friday 30 January 2026 06:43:18 +0000 (0:00:01.290) 0:55:11.996 ********
2026-01-30 06:43:37.590610 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:37.590616 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:37.590621 | orchestrator |
2026-01-30 06:43:37.590627 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 06:43:37.590632 | orchestrator | Friday 30 January 2026 06:43:19 +0000 (0:00:01.421) 0:55:13.418 ********
2026-01-30 06:43:37.590637 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.590643 | orchestrator |
2026-01-30 06:43:37.590648 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 06:43:37.590652 | orchestrator | Friday 30 January 2026 06:43:21 +0000 (0:00:01.234) 0:55:14.653 ********
2026-01-30 06:43:37.590657 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:37.590662 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:37.590667 | orchestrator |
2026-01-30 06:43:37.590672 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 06:43:37.590676 | orchestrator | Friday 30 January 2026 06:43:22 +0000 (0:00:01.260) 0:55:15.913 ********
2026-01-30 06:43:37.590681 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:43:37.590687 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:43:37.590691 | orchestrator |
2026-01-30 06:43:37.590696 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:43:37.590701 | orchestrator | Friday 30 January 2026 06:43:24 +0000 (0:00:02.499) 0:55:18.412 ********
2026-01-30 06:43:37.590706 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:37.590726 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:37.590732 | orchestrator |
2026-01-30 06:43:37.590737 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 06:43:37.590742 | orchestrator | Friday 30 January 2026 06:43:26 +0000 (0:00:01.251) 0:55:19.663 ********
2026-01-30 06:43:37.590746 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.590751 | orchestrator |
2026-01-30 06:43:37.590756 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 06:43:37.590761 | orchestrator | Friday 30 January 2026 06:43:27 +0000 (0:00:01.150) 0:55:20.814 ********
2026-01-30 06:43:37.590765 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.590770 | orchestrator |
2026-01-30 06:43:37.590775 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:43:37.590779 | orchestrator | Friday 30 January 2026 06:43:28 +0000 (0:00:01.186) 0:55:22.001 ********
2026-01-30 06:43:37.590784 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.590789 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:43:37.590793 | orchestrator |
2026-01-30 06:43:37.590798 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 06:43:37.590803 | orchestrator | Friday 30 January 2026 06:43:29 +0000 (0:00:01.264) 0:55:23.265 ********
2026-01-30 06:43:37.590808 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.590812 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:43:37.590817 | orchestrator |
2026-01-30 06:43:37.590822 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 06:43:37.590826 | orchestrator | Friday 30 January 2026 06:43:30 +0000 (0:00:01.192) 0:55:24.458 ********
2026-01-30 06:43:37.590831 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:37.590870 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:37.590876 | orchestrator |
2026-01-30 06:43:37.590881 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 06:43:37.590886 | orchestrator | Friday 30 January 2026 06:43:32 +0000 (0:00:01.244) 0:55:25.702 ********
2026-01-30 06:43:37.590891 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.590896 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:43:37.590901 | orchestrator |
2026-01-30 06:43:37.590906 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 06:43:37.590910 | orchestrator | Friday 30 January 2026 06:43:33 +0000 (0:00:01.229) 0:55:26.932 ********
2026-01-30 06:43:37.590915 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:37.590921 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:37.590926 | orchestrator |
2026-01-30 06:43:37.590930 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-30 06:43:37.590935 | orchestrator | Friday 30 January 2026 06:43:34 +0000 (0:00:01.255) 0:55:28.188 ********
2026-01-30 06:43:37.590986 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.590992 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:43:37.590997 | orchestrator |
2026-01-30 06:43:37.591002 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-30 06:43:37.591007 | orchestrator | Friday 30 January 2026 06:43:35 +0000 (0:00:01.234) 0:55:29.423 ********
2026-01-30 06:43:37.591012 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:43:37.591017 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:43:37.591022 | orchestrator |
2026-01-30 06:43:37.591027 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-30 06:43:37.591032 | orchestrator | Friday 30 January 2026 06:43:37 +0000 (0:00:01.253) 0:55:30.677 ********
2026-01-30 06:43:37.591038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.591087 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}})
2026-01-30 06:43:37.591103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:43:37.591112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}})
2026-01-30 06:43:37.591118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.591125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.591132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 06:43:37.591138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.591156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-01-30 06:43:37.846155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.846264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}})
2026-01-30 06:43:37.846284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}})
2026-01-30 06:43:37.846298 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.846351 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:43:37.846391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.846404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.846416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-01-30 06:43:37.846429 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:43:37.846442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:37.846454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}})
2026-01-30 06:43:37.846467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-01-30 06:43:37.846502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}})
2026-01-30 06:43:39.026354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:39.026457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:39.026474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-01-30 06:43:39.026485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:39.026492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-01-30 06:43:39.026499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-01-30 06:43:39.026538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}})
2026-01-30 06:43:39.026561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}})  2026-01-30 06:43:39.026569 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:43:39.026579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:43:39.026592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:43:39.026603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:43:39.026615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:43:39.247784 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:43:39.247969 | orchestrator | 2026-01-30 06:43:39.247991 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:43:39.248006 | orchestrator | Friday 30 January 2026 06:43:39 +0000 (0:00:01.950) 0:55:32.627 ******** 2026-01-30 06:43:39.248026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248044 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248060 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248162 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248178 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248191 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248226 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248245 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.248266 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298737 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298868 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298892 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.298914 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412597 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412605 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412622 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:43:39.412630 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412649 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412662 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412679 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 
'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412691 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:43:39.412706 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 
'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:44:08.470326 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:44:08.470450 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:44:08.470489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:44:08.470505 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.470517 | orchestrator | 2026-01-30 06:44:08.470528 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:44:08.470539 | orchestrator | Friday 30 January 2026 06:43:40 +0000 (0:00:01.503) 0:55:34.131 ******** 2026-01-30 06:44:08.470549 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:08.470559 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:08.470569 | orchestrator | 2026-01-30 06:44:08.470581 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:44:08.470598 | orchestrator | Friday 30 January 2026 06:43:42 +0000 (0:00:01.601) 0:55:35.732 ******** 2026-01-30 06:44:08.470614 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:08.470629 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:08.470645 | orchestrator | 2026-01-30 06:44:08.470661 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:44:08.470678 | orchestrator | Friday 30 January 2026 06:43:43 +0000 (0:00:01.225) 0:55:36.958 ******** 2026-01-30 06:44:08.470717 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:08.470733 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:08.470750 | orchestrator | 2026-01-30 06:44:08.470825 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:44:08.470844 | orchestrator | Friday 30 January 2026 06:43:44 +0000 (0:00:01.585) 0:55:38.544 ******** 2026-01-30 06:44:08.470861 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.470878 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.470895 | orchestrator | 2026-01-30 06:44:08.470912 | orchestrator | TASK [ceph-facts : 
Read osd pool default crush rule] *************************** 2026-01-30 06:44:08.470929 | orchestrator | Friday 30 January 2026 06:43:46 +0000 (0:00:01.223) 0:55:39.768 ******** 2026-01-30 06:44:08.470946 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.470963 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.470979 | orchestrator | 2026-01-30 06:44:08.470996 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:44:08.471012 | orchestrator | Friday 30 January 2026 06:43:47 +0000 (0:00:01.709) 0:55:41.477 ******** 2026-01-30 06:44:08.471028 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.471045 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.471062 | orchestrator | 2026-01-30 06:44:08.471079 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:44:08.471096 | orchestrator | Friday 30 January 2026 06:43:49 +0000 (0:00:01.321) 0:55:42.799 ******** 2026-01-30 06:44:08.471113 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-30 06:44:08.471130 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-01-30 06:44:08.471146 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-30 06:44:08.471162 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-01-30 06:44:08.471179 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-30 06:44:08.471195 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-01-30 06:44:08.471213 | orchestrator | 2026-01-30 06:44:08.471229 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:44:08.471246 | orchestrator | Friday 30 January 2026 06:43:50 +0000 (0:00:01.769) 0:55:44.568 ******** 2026-01-30 06:44:08.471288 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-30 06:44:08.471306 | orchestrator 
| skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-30 06:44:08.471324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-30 06:44:08.471340 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.471357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-30 06:44:08.471375 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-30 06:44:08.471391 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-30 06:44:08.471407 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.471423 | orchestrator | 2026-01-30 06:44:08.471440 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:44:08.471455 | orchestrator | Friday 30 January 2026 06:43:52 +0000 (0:00:01.230) 0:55:45.799 ******** 2026-01-30 06:44:08.471472 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5, testbed-node-4 2026-01-30 06:44:08.471489 | orchestrator | 2026-01-30 06:44:08.471506 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:44:08.471524 | orchestrator | Friday 30 January 2026 06:43:53 +0000 (0:00:01.241) 0:55:47.041 ******** 2026-01-30 06:44:08.471540 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.471556 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.471572 | orchestrator | 2026-01-30 06:44:08.471588 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:44:08.471606 | orchestrator | Friday 30 January 2026 06:43:54 +0000 (0:00:01.219) 0:55:48.260 ******** 2026-01-30 06:44:08.471622 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.471638 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.471654 | orchestrator | 2026-01-30 06:44:08.471670 
| orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:44:08.471711 | orchestrator | Friday 30 January 2026 06:43:56 +0000 (0:00:01.595) 0:55:49.856 ******** 2026-01-30 06:44:08.471729 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.471746 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:08.471786 | orchestrator | 2026-01-30 06:44:08.471804 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:44:08.471823 | orchestrator | Friday 30 January 2026 06:43:57 +0000 (0:00:01.275) 0:55:51.131 ******** 2026-01-30 06:44:08.471841 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:08.471858 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:08.471876 | orchestrator | 2026-01-30 06:44:08.471894 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:44:08.471911 | orchestrator | Friday 30 January 2026 06:43:58 +0000 (0:00:01.369) 0:55:52.501 ******** 2026-01-30 06:44:08.471930 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:44:08.471948 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:44:08.471966 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 06:44:08.471984 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.472001 | orchestrator | 2026-01-30 06:44:08.472019 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:44:08.472038 | orchestrator | Friday 30 January 2026 06:44:00 +0000 (0:00:01.466) 0:55:53.967 ******** 2026-01-30 06:44:08.472056 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:44:08.472073 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:44:08.472090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  
2026-01-30 06:44:08.472108 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.472125 | orchestrator | 2026-01-30 06:44:08.472143 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:44:08.472161 | orchestrator | Friday 30 January 2026 06:44:01 +0000 (0:00:01.422) 0:55:55.390 ******** 2026-01-30 06:44:08.472178 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:44:08.472196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:44:08.472213 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 06:44:08.472230 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:08.472248 | orchestrator | 2026-01-30 06:44:08.472266 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:44:08.472283 | orchestrator | Friday 30 January 2026 06:44:03 +0000 (0:00:01.467) 0:55:56.858 ******** 2026-01-30 06:44:08.472302 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:08.472320 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:08.472337 | orchestrator | 2026-01-30 06:44:08.472354 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:44:08.472370 | orchestrator | Friday 30 January 2026 06:44:04 +0000 (0:00:01.225) 0:55:58.084 ******** 2026-01-30 06:44:08.472386 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-30 06:44:08.472401 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-30 06:44:08.472461 | orchestrator | 2026-01-30 06:44:08.472478 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:44:08.472491 | orchestrator | Friday 30 January 2026 06:44:06 +0000 (0:00:01.821) 0:55:59.905 ******** 2026-01-30 06:44:08.472506 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 
06:44:08.472522 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:44:08.472537 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:44:08.472553 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:44:08.472569 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:44:08.472599 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-01-30 06:44:08.472630 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:44:53.351244 | orchestrator | 2026-01-30 06:44:53.351395 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:44:53.351425 | orchestrator | Friday 30 January 2026 06:44:08 +0000 (0:00:02.153) 0:56:02.059 ******** 2026-01-30 06:44:53.351444 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:44:53.351462 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:44:53.351481 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:44:53.351500 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:44:53.351519 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:44:53.351538 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-01-30 06:44:53.351558 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:44:53.351577 | orchestrator | 2026-01-30 06:44:53.351597 | orchestrator | TASK [Prevent restarts from the packaging] ************************************* 2026-01-30 
06:44:53.351617 | orchestrator | Friday 30 January 2026 06:44:11 +0000 (0:00:02.588) 0:56:04.647 ******** 2026-01-30 06:44:53.351636 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.351688 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.351701 | orchestrator | 2026-01-30 06:44:53.351713 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 06:44:53.351724 | orchestrator | Friday 30 January 2026 06:44:12 +0000 (0:00:01.220) 0:56:05.867 ******** 2026-01-30 06:44:53.351736 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5, testbed-node-4 2026-01-30 06:44:53.351748 | orchestrator | 2026-01-30 06:44:53.351776 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 06:44:53.351789 | orchestrator | Friday 30 January 2026 06:44:13 +0000 (0:00:01.250) 0:56:07.118 ******** 2026-01-30 06:44:53.351802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5, testbed-node-4 2026-01-30 06:44:53.351815 | orchestrator | 2026-01-30 06:44:53.351827 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 06:44:53.351840 | orchestrator | Friday 30 January 2026 06:44:14 +0000 (0:00:01.312) 0:56:08.430 ******** 2026-01-30 06:44:53.351852 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.351864 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.351877 | orchestrator | 2026-01-30 06:44:53.351889 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 06:44:53.351901 | orchestrator | Friday 30 January 2026 06:44:16 +0000 (0:00:01.556) 0:56:09.987 ******** 2026-01-30 06:44:53.351914 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.351926 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.351938 | 
orchestrator | 2026-01-30 06:44:53.351952 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-30 06:44:53.351964 | orchestrator | Friday 30 January 2026 06:44:18 +0000 (0:00:01.689) 0:56:11.676 ******** 2026-01-30 06:44:53.351976 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.351988 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.352000 | orchestrator | 2026-01-30 06:44:53.352012 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 06:44:53.352024 | orchestrator | Friday 30 January 2026 06:44:19 +0000 (0:00:01.655) 0:56:13.332 ******** 2026-01-30 06:44:53.352036 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.352048 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.352061 | orchestrator | 2026-01-30 06:44:53.352074 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:44:53.352087 | orchestrator | Friday 30 January 2026 06:44:21 +0000 (0:00:01.698) 0:56:15.031 ******** 2026-01-30 06:44:53.352122 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.352135 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.352148 | orchestrator | 2026-01-30 06:44:53.352160 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:44:53.352173 | orchestrator | Friday 30 January 2026 06:44:22 +0000 (0:00:01.234) 0:56:16.266 ******** 2026-01-30 06:44:53.352184 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.352195 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.352205 | orchestrator | 2026-01-30 06:44:53.352216 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:44:53.352227 | orchestrator | Friday 30 January 2026 06:44:23 +0000 (0:00:01.284) 0:56:17.550 ******** 2026-01-30 06:44:53.352237 | orchestrator | skipping: 
[testbed-node-5] 2026-01-30 06:44:53.352248 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.352259 | orchestrator | 2026-01-30 06:44:53.352269 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:44:53.352280 | orchestrator | Friday 30 January 2026 06:44:25 +0000 (0:00:01.369) 0:56:18.920 ******** 2026-01-30 06:44:53.352291 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.352301 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.352312 | orchestrator | 2026-01-30 06:44:53.352323 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:44:53.352333 | orchestrator | Friday 30 January 2026 06:44:26 +0000 (0:00:01.686) 0:56:20.607 ******** 2026-01-30 06:44:53.352344 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.352354 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.352365 | orchestrator | 2026-01-30 06:44:53.352376 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:44:53.352386 | orchestrator | Friday 30 January 2026 06:44:28 +0000 (0:00:01.775) 0:56:22.383 ******** 2026-01-30 06:44:53.352401 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.352421 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.352441 | orchestrator | 2026-01-30 06:44:53.352462 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:44:53.352482 | orchestrator | Friday 30 January 2026 06:44:30 +0000 (0:00:01.254) 0:56:23.637 ******** 2026-01-30 06:44:53.352501 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.352545 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.352566 | orchestrator | 2026-01-30 06:44:53.352585 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:44:53.352603 | orchestrator | Friday 30 
January 2026 06:44:31 +0000 (0:00:01.312) 0:56:24.949 ******** 2026-01-30 06:44:53.352622 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.352641 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.352685 | orchestrator | 2026-01-30 06:44:53.352703 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:44:53.352721 | orchestrator | Friday 30 January 2026 06:44:32 +0000 (0:00:01.252) 0:56:26.202 ******** 2026-01-30 06:44:53.352738 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.352756 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.352775 | orchestrator | 2026-01-30 06:44:53.352792 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:44:53.352810 | orchestrator | Friday 30 January 2026 06:44:33 +0000 (0:00:01.297) 0:56:27.499 ******** 2026-01-30 06:44:53.352828 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.352848 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.352864 | orchestrator | 2026-01-30 06:44:53.352883 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:44:53.352900 | orchestrator | Friday 30 January 2026 06:44:35 +0000 (0:00:01.603) 0:56:29.103 ******** 2026-01-30 06:44:53.352917 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.352934 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.352951 | orchestrator | 2026-01-30 06:44:53.352968 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:44:53.353003 | orchestrator | Friday 30 January 2026 06:44:36 +0000 (0:00:01.261) 0:56:30.364 ******** 2026-01-30 06:44:53.353022 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353041 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353061 | orchestrator | 2026-01-30 06:44:53.353074 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-01-30 06:44:53.353094 | orchestrator | Friday 30 January 2026 06:44:38 +0000 (0:00:01.249) 0:56:31.614 ******** 2026-01-30 06:44:53.353105 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353115 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353126 | orchestrator | 2026-01-30 06:44:53.353136 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:44:53.353147 | orchestrator | Friday 30 January 2026 06:44:39 +0000 (0:00:01.277) 0:56:32.892 ******** 2026-01-30 06:44:53.353158 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.353168 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.353179 | orchestrator | 2026-01-30 06:44:53.353190 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:44:53.353200 | orchestrator | Friday 30 January 2026 06:44:40 +0000 (0:00:01.250) 0:56:34.143 ******** 2026-01-30 06:44:53.353211 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:44:53.353222 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:44:53.353232 | orchestrator | 2026-01-30 06:44:53.353243 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:44:53.353254 | orchestrator | Friday 30 January 2026 06:44:41 +0000 (0:00:01.442) 0:56:35.586 ******** 2026-01-30 06:44:53.353264 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353275 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353285 | orchestrator | 2026-01-30 06:44:53.353296 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 06:44:53.353307 | orchestrator | Friday 30 January 2026 06:44:43 +0000 (0:00:01.242) 0:56:36.828 ******** 2026-01-30 06:44:53.353317 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353328 | orchestrator | skipping: [testbed-node-4] 
2026-01-30 06:44:53.353338 | orchestrator | 2026-01-30 06:44:53.353349 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:44:53.353359 | orchestrator | Friday 30 January 2026 06:44:44 +0000 (0:00:01.238) 0:56:38.067 ******** 2026-01-30 06:44:53.353370 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353381 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353391 | orchestrator | 2026-01-30 06:44:53.353421 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:44:53.353432 | orchestrator | Friday 30 January 2026 06:44:45 +0000 (0:00:01.311) 0:56:39.379 ******** 2026-01-30 06:44:53.353456 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353467 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353478 | orchestrator | 2026-01-30 06:44:53.353488 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:44:53.353499 | orchestrator | Friday 30 January 2026 06:44:47 +0000 (0:00:01.239) 0:56:40.619 ******** 2026-01-30 06:44:53.353510 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353521 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353531 | orchestrator | 2026-01-30 06:44:53.353542 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:44:53.353553 | orchestrator | Friday 30 January 2026 06:44:48 +0000 (0:00:01.244) 0:56:41.863 ******** 2026-01-30 06:44:53.353564 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353574 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353585 | orchestrator | 2026-01-30 06:44:53.353595 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:44:53.353606 | orchestrator | Friday 30 January 2026 06:44:49 +0000 (0:00:01.271) 0:56:43.135 ******** 
2026-01-30 06:44:53.353617 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353628 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353646 | orchestrator | 2026-01-30 06:44:53.353695 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:44:53.353715 | orchestrator | Friday 30 January 2026 06:44:50 +0000 (0:00:01.323) 0:56:44.459 ******** 2026-01-30 06:44:53.353734 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353751 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353767 | orchestrator | 2026-01-30 06:44:53.353778 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:44:53.353789 | orchestrator | Friday 30 January 2026 06:44:52 +0000 (0:00:01.224) 0:56:45.684 ******** 2026-01-30 06:44:53.353800 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:44:53.353811 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:44:53.353821 | orchestrator | 2026-01-30 06:44:53.353847 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:45:38.499910 | orchestrator | Friday 30 January 2026 06:44:53 +0000 (0:00:01.261) 0:56:46.946 ******** 2026-01-30 06:45:38.500022 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:45:38.500045 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:45:38.500061 | orchestrator | 2026-01-30 06:45:38.500074 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:45:38.500086 | orchestrator | Friday 30 January 2026 06:44:54 +0000 (0:00:01.219) 0:56:48.166 ******** 2026-01-30 06:45:38.500099 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:45:38.500111 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:45:38.500124 | orchestrator | 2026-01-30 06:45:38.500137 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-01-30 06:45:38.500148 | orchestrator | Friday 30 January 2026 06:44:55 +0000 (0:00:01.256) 0:56:49.422 ******** 2026-01-30 06:45:38.500160 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:45:38.500167 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:45:38.500173 | orchestrator | 2026-01-30 06:45:38.500180 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:45:38.500188 | orchestrator | Friday 30 January 2026 06:44:57 +0000 (0:00:01.277) 0:56:50.700 ******** 2026-01-30 06:45:38.500200 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:45:38.500212 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:45:38.500223 | orchestrator | 2026-01-30 06:45:38.500234 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:45:38.500245 | orchestrator | Friday 30 January 2026 06:44:59 +0000 (0:00:02.510) 0:56:53.211 ******** 2026-01-30 06:45:38.500256 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:45:38.500267 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:45:38.500276 | orchestrator | 2026-01-30 06:45:38.500287 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:45:38.500300 | orchestrator | Friday 30 January 2026 06:45:02 +0000 (0:00:02.455) 0:56:55.667 ******** 2026-01-30 06:45:38.500329 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5, testbed-node-4 2026-01-30 06:45:38.500341 | orchestrator | 2026-01-30 06:45:38.500353 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:45:38.500364 | orchestrator | Friday 30 January 2026 06:45:03 +0000 (0:00:01.235) 0:56:56.902 ******** 2026-01-30 06:45:38.500374 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:45:38.500386 | orchestrator | skipping: [testbed-node-4] 
2026-01-30 06:45:38.500397 | orchestrator |
2026-01-30 06:45:38.500407 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-30 06:45:38.500418 | orchestrator | Friday 30 January 2026 06:45:04 +0000 (0:00:01.224) 0:56:58.127 ********
2026-01-30 06:45:38.500429 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.500441 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.500453 | orchestrator |
2026-01-30 06:45:38.500464 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-30 06:45:38.500477 | orchestrator | Friday 30 January 2026 06:45:05 +0000 (0:00:01.265) 0:56:59.393 ********
2026-01-30 06:45:38.500512 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 06:45:38.500524 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 06:45:38.500536 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 06:45:38.500641 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 06:45:38.500658 | orchestrator |
2026-01-30 06:45:38.500668 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-30 06:45:38.500680 | orchestrator | Friday 30 January 2026 06:45:07 +0000 (0:00:02.056) 0:57:01.449 ********
2026-01-30 06:45:38.500692 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:45:38.500704 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:45:38.500715 | orchestrator |
2026-01-30 06:45:38.500726 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-30 06:45:38.500739 | orchestrator | Friday 30 January 2026 06:45:09 +0000 (0:00:01.642) 0:57:03.092 ********
2026-01-30 06:45:38.500750 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.500762 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.500773 | orchestrator |
2026-01-30 06:45:38.500783 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-30 06:45:38.500793 | orchestrator | Friday 30 January 2026 06:45:10 +0000 (0:00:01.226) 0:57:04.319 ********
2026-01-30 06:45:38.500804 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.500815 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.500825 | orchestrator |
2026-01-30 06:45:38.500836 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 06:45:38.500847 | orchestrator | Friday 30 January 2026 06:45:11 +0000 (0:00:01.267) 0:57:05.586 ********
2026-01-30 06:45:38.500859 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.500869 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.500879 | orchestrator |
2026-01-30 06:45:38.500890 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 06:45:38.500901 | orchestrator | Friday 30 January 2026 06:45:13 +0000 (0:00:01.203) 0:57:06.790 ********
2026-01-30 06:45:38.500913 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5, testbed-node-4
2026-01-30 06:45:38.500924 | orchestrator |
2026-01-30 06:45:38.500936 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-30 06:45:38.500947 | orchestrator | Friday 30 January 2026 06:45:14 +0000 (0:00:01.283) 0:57:08.073 ********
2026-01-30 06:45:38.500958 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:45:38.500968 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:45:38.500978 | orchestrator |
2026-01-30 06:45:38.500989 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-30 06:45:38.501000 | orchestrator | Friday 30 January 2026 06:45:16 +0000 (0:00:01.981) 0:57:10.055 ********
2026-01-30 06:45:38.501010 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 06:45:38.501039 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 06:45:38.501046 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 06:45:38.501052 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501058 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 06:45:38.501064 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 06:45:38.501071 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 06:45:38.501077 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501083 | orchestrator |
2026-01-30 06:45:38.501089 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-30 06:45:38.501095 | orchestrator | Friday 30 January 2026 06:45:17 +0000 (0:00:01.281) 0:57:11.336 ********
2026-01-30 06:45:38.501101 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501117 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501123 | orchestrator |
2026-01-30 06:45:38.501129 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-30 06:45:38.501135 | orchestrator | Friday 30 January 2026 06:45:19 +0000 (0:00:01.276) 0:57:12.613 ********
2026-01-30 06:45:38.501141 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501147 | orchestrator |
2026-01-30 06:45:38.501154 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-30 06:45:38.501160 | orchestrator | Friday 30 January 2026 06:45:20 +0000 (0:00:01.161) 0:57:13.775 ********
2026-01-30 06:45:38.501166 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501172 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501178 | orchestrator |
2026-01-30 06:45:38.501184 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-30 06:45:38.501197 | orchestrator | Friday 30 January 2026 06:45:21 +0000 (0:00:01.274) 0:57:15.050 ********
2026-01-30 06:45:38.501203 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501209 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501215 | orchestrator |
2026-01-30 06:45:38.501222 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-30 06:45:38.501228 | orchestrator | Friday 30 January 2026 06:45:22 +0000 (0:00:01.260) 0:57:16.310 ********
2026-01-30 06:45:38.501234 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501240 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501246 | orchestrator |
2026-01-30 06:45:38.501252 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-30 06:45:38.501258 | orchestrator | Friday 30 January 2026 06:45:23 +0000 (0:00:01.289) 0:57:17.600 ********
2026-01-30 06:45:38.501264 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:45:38.501270 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:45:38.501276 | orchestrator |
2026-01-30 06:45:38.501282 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-30 06:45:38.501288 | orchestrator | Friday 30 January 2026 06:45:26 +0000 (0:00:02.727) 0:57:20.328 ********
2026-01-30 06:45:38.501294 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:45:38.501300 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:45:38.501306 | orchestrator |
2026-01-30 06:45:38.501312 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-30 06:45:38.501320 | orchestrator | Friday 30 January 2026 06:45:27 +0000 (0:00:01.260) 0:57:21.589 ********
2026-01-30 06:45:38.501331 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5, testbed-node-4
2026-01-30 06:45:38.501339 | orchestrator |
2026-01-30 06:45:38.501345 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-30 06:45:38.501352 | orchestrator | Friday 30 January 2026 06:45:29 +0000 (0:00:01.282) 0:57:22.871 ********
2026-01-30 06:45:38.501358 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501364 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501370 | orchestrator |
2026-01-30 06:45:38.501376 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-30 06:45:38.501382 | orchestrator | Friday 30 January 2026 06:45:30 +0000 (0:00:01.360) 0:57:24.232 ********
2026-01-30 06:45:38.501388 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501394 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501400 | orchestrator |
2026-01-30 06:45:38.501406 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-30 06:45:38.501412 | orchestrator | Friday 30 January 2026 06:45:31 +0000 (0:00:01.261) 0:57:25.493 ********
2026-01-30 06:45:38.501418 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501424 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501431 | orchestrator |
2026-01-30 06:45:38.501437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-30 06:45:38.501443 | orchestrator | Friday 30 January 2026 06:45:33 +0000 (0:00:01.244) 0:57:26.738 ********
2026-01-30 06:45:38.501454 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501460 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501466 | orchestrator |
2026-01-30 06:45:38.501472 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-30 06:45:38.501478 | orchestrator | Friday 30 January 2026 06:45:34 +0000 (0:00:01.627) 0:57:28.365 ********
2026-01-30 06:45:38.501484 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501490 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501496 | orchestrator |
2026-01-30 06:45:38.501502 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-30 06:45:38.501508 | orchestrator | Friday 30 January 2026 06:45:35 +0000 (0:00:01.222) 0:57:29.588 ********
2026-01-30 06:45:38.501514 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501520 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501530 | orchestrator |
2026-01-30 06:45:38.501540 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-30 06:45:38.501568 | orchestrator | Friday 30 January 2026 06:45:37 +0000 (0:00:01.260) 0:57:30.849 ********
2026-01-30 06:45:38.501578 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:45:38.501589 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:45:38.501599 | orchestrator |
2026-01-30 06:45:38.501616 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-30 06:46:19.283845 | orchestrator | Friday 30 January 2026 06:45:38 +0000 (0:00:01.248) 0:57:32.097 ********
2026-01-30 06:46:19.283957 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.283972 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.283982 | orchestrator |
2026-01-30 06:46:19.283993 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-30 06:46:19.284003 | orchestrator | Friday 30 January 2026 06:45:39 +0000 (0:00:01.257) 0:57:33.354 ********
2026-01-30 06:46:19.284013 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:46:19.284023 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:46:19.284033 | orchestrator |
2026-01-30 06:46:19.284043 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-30 06:46:19.284053 | orchestrator | Friday 30 January 2026 06:45:41 +0000 (0:00:01.310) 0:57:34.665 ********
2026-01-30 06:46:19.284063 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5, testbed-node-4
2026-01-30 06:46:19.284073 | orchestrator |
2026-01-30 06:46:19.284083 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-30 06:46:19.284092 | orchestrator | Friday 30 January 2026 06:45:42 +0000 (0:00:01.167) 0:57:35.832 ********
2026-01-30 06:46:19.284102 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-01-30 06:46:19.284112 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-01-30 06:46:19.284121 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-01-30 06:46:19.284131 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-01-30 06:46:19.284140 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-01-30 06:46:19.284150 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-01-30 06:46:19.284159 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-01-30 06:46:19.284184 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-01-30 06:46:19.284194 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-01-30 06:46:19.284203 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-01-30 06:46:19.284213 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-01-30 06:46:19.284222 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-01-30 06:46:19.284232 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-01-30 06:46:19.284241 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-01-30 06:46:19.284251 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-01-30 06:46:19.284261 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-01-30 06:46:19.284291 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 06:46:19.284301 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 06:46:19.284311 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 06:46:19.284320 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 06:46:19.284330 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 06:46:19.284339 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 06:46:19.284348 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 06:46:19.284358 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 06:46:19.284367 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 06:46:19.284377 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 06:46:19.284387 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 06:46:19.284399 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 06:46:19.284410 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-01-30 06:46:19.284422 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-01-30 06:46:19.284432 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-01-30 06:46:19.284443 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-01-30 06:46:19.284453 | orchestrator |
2026-01-30 06:46:19.284547 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-30 06:46:19.284560 | orchestrator | Friday 30 January 2026 06:45:48 +0000 (0:00:06.729) 0:57:42.562 ********
2026-01-30 06:46:19.284572 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5, testbed-node-4
2026-01-30 06:46:19.284582 | orchestrator |
2026-01-30 06:46:19.284592 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-30 06:46:19.284601 | orchestrator | Friday 30 January 2026 06:45:50 +0000 (0:00:01.187) 0:57:43.749 ********
2026-01-30 06:46:19.284611 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-30 06:46:19.284623 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 06:46:19.284633 | orchestrator |
2026-01-30 06:46:19.284642 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-30 06:46:19.284652 | orchestrator | Friday 30 January 2026 06:45:51 +0000 (0:00:01.549) 0:57:45.298 ********
2026-01-30 06:46:19.284661 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-30 06:46:19.284672 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 06:46:19.284681 | orchestrator |
2026-01-30 06:46:19.284691 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-30 06:46:19.284717 | orchestrator | Friday 30 January 2026 06:45:53 +0000 (0:00:02.306) 0:57:47.605 ********
2026-01-30 06:46:19.284727 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.284741 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.284759 | orchestrator |
2026-01-30 06:46:19.284780 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 06:46:19.284804 | orchestrator | Friday 30 January 2026 06:45:55 +0000 (0:00:01.257) 0:57:48.863 ********
2026-01-30 06:46:19.284821 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.284855 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.284885 | orchestrator |
2026-01-30 06:46:19.284901 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 06:46:19.284918 | orchestrator | Friday 30 January 2026 06:45:56 +0000 (0:00:01.252) 0:57:50.115 ********
2026-01-30 06:46:19.284944 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.284954 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.284963 | orchestrator |
2026-01-30 06:46:19.284972 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 06:46:19.284982 | orchestrator | Friday 30 January 2026 06:45:57 +0000 (0:00:01.316) 0:57:51.432 ********
2026-01-30 06:46:19.285001 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285011 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285021 | orchestrator |
2026-01-30 06:46:19.285030 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 06:46:19.285040 | orchestrator | Friday 30 January 2026 06:45:59 +0000 (0:00:01.276) 0:57:52.708 ********
2026-01-30 06:46:19.285049 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285059 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285068 | orchestrator |
2026-01-30 06:46:19.285078 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-30 06:46:19.285096 | orchestrator | Friday 30 January 2026 06:46:00 +0000 (0:00:01.215) 0:57:53.924 ********
2026-01-30 06:46:19.285105 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285115 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285124 | orchestrator |
2026-01-30 06:46:19.285136 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 06:46:19.285154 | orchestrator | Friday 30 January 2026 06:46:01 +0000 (0:00:01.207) 0:57:55.131 ********
2026-01-30 06:46:19.285177 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285197 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285213 | orchestrator |
2026-01-30 06:46:19.285229 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-30 06:46:19.285245 | orchestrator | Friday 30 January 2026 06:46:03 +0000 (0:00:01.556) 0:57:56.688 ********
2026-01-30 06:46:19.285262 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285276 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285293 | orchestrator |
2026-01-30 06:46:19.285309 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-30 06:46:19.285327 | orchestrator | Friday 30 January 2026 06:46:04 +0000 (0:00:01.256) 0:57:57.944 ********
2026-01-30 06:46:19.285346 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285364 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285382 | orchestrator |
2026-01-30 06:46:19.285400 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-30 06:46:19.285419 | orchestrator | Friday 30 January 2026 06:46:05 +0000 (0:00:01.231) 0:57:59.176 ********
2026-01-30 06:46:19.285436 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285448 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285458 | orchestrator |
2026-01-30 06:46:19.285496 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-30 06:46:19.285506 | orchestrator | Friday 30 January 2026 06:46:06 +0000 (0:00:01.204) 0:58:00.380 ********
2026-01-30 06:46:19.285516 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:46:19.285526 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:46:19.285535 | orchestrator |
2026-01-30 06:46:19.285544 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-30 06:46:19.285554 | orchestrator | Friday 30 January 2026 06:46:08 +0000 (0:00:01.246) 0:58:01.627 ********
2026-01-30 06:46:19.285563 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-01-30 06:46:19.285572 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-01-30 06:46:19.285582 | orchestrator |
2026-01-30 06:46:19.285591 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-30 06:46:19.285601 | orchestrator | Friday 30 January 2026 06:46:12 +0000 (0:00:04.758) 0:58:06.385 ********
2026-01-30 06:46:19.285610 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-30 06:46:19.285631 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 06:46:19.285640 | orchestrator |
2026-01-30 06:46:19.285650 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-30 06:46:19.285659 | orchestrator | Friday 30 January 2026 06:46:14 +0000 (0:00:01.357) 0:58:07.743 ********
2026-01-30 06:46:19.285671 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-01-30 06:46:19.285695 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-01-30 06:47:08.566918 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-01-30 06:47:08.567012 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-01-30 06:47:08.567026 | orchestrator |
2026-01-30 06:47:08.567039 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-30 06:47:08.567051 | orchestrator | Friday 30 January 2026 06:46:19 +0000 (0:00:05.139) 0:58:12.882 ********
2026-01-30 06:47:08.567063 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567076 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:47:08.567087 | orchestrator |
2026-01-30 06:47:08.567094 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-30 06:47:08.567101 | orchestrator | Friday 30 January 2026 06:46:20 +0000 (0:00:01.229) 0:58:14.112 ********
2026-01-30 06:47:08.567107 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567125 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:47:08.567132 | orchestrator |
2026-01-30 06:47:08.567140 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:47:08.567148 | orchestrator | Friday 30 January 2026 06:46:21 +0000 (0:00:01.251) 0:58:15.364 ********
2026-01-30 06:47:08.567154 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567160 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:47:08.567166 | orchestrator |
2026-01-30 06:47:08.567172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:47:08.567178 | orchestrator | Friday 30 January 2026 06:46:23 +0000 (0:00:01.252) 0:58:16.617 ********
2026-01-30 06:47:08.567185 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567191 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:47:08.567197 | orchestrator |
2026-01-30 06:47:08.567204 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:47:08.567210 | orchestrator | Friday 30 January 2026 06:46:24 +0000 (0:00:01.283) 0:58:17.901 ********
2026-01-30 06:47:08.567216 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567222 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:47:08.567228 | orchestrator |
2026-01-30 06:47:08.567234 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:47:08.567256 | orchestrator | Friday 30 January 2026 06:46:25 +0000 (0:00:01.242) 0:58:19.144 ********
2026-01-30 06:47:08.567263 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.567270 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.567276 | orchestrator |
2026-01-30 06:47:08.567283 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:47:08.567289 | orchestrator | Friday 30 January 2026 06:46:27 +0000 (0:00:01.744) 0:58:20.888 ********
2026-01-30 06:47:08.567295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-30 06:47:08.567302 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-30 06:47:08.567308 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:47:08.567314 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567320 | orchestrator |
2026-01-30 06:47:08.567326 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:47:08.567332 | orchestrator | Friday 30 January 2026 06:46:28 +0000 (0:00:01.444) 0:58:22.333 ********
2026-01-30 06:47:08.567338 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-30 06:47:08.567345 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-30 06:47:08.567351 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:47:08.567357 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567430 | orchestrator |
2026-01-30 06:47:08.567439 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:47:08.567445 | orchestrator | Friday 30 January 2026 06:46:30 +0000 (0:00:01.433) 0:58:23.767 ********
2026-01-30 06:47:08.567451 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-01-30 06:47:08.567457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-01-30 06:47:08.567463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-01-30 06:47:08.567471 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567478 | orchestrator |
2026-01-30 06:47:08.567485 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:47:08.567492 | orchestrator | Friday 30 January 2026 06:46:31 +0000 (0:00:01.417) 0:58:25.184 ********
2026-01-30 06:47:08.567499 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.567507 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.567514 | orchestrator |
2026-01-30 06:47:08.567521 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:47:08.567529 | orchestrator | Friday 30 January 2026 06:46:32 +0000 (0:00:01.240) 0:58:26.424 ********
2026-01-30 06:47:08.567536 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-01-30 06:47:08.567543 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-30 06:47:08.567550 | orchestrator |
2026-01-30 06:47:08.567557 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 06:47:08.567565 | orchestrator | Friday 30 January 2026 06:46:34 +0000 (0:00:01.475) 0:58:27.900 ********
2026-01-30 06:47:08.567574 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.567582 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.567590 | orchestrator |
2026-01-30 06:47:08.567613 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-01-30 06:47:08.567622 | orchestrator | Friday 30 January 2026 06:46:36 +0000 (0:00:02.089) 0:58:29.990 ********
2026-01-30 06:47:08.567630 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.567638 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:47:08.567646 | orchestrator |
2026-01-30 06:47:08.567655 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-01-30 06:47:08.567674 | orchestrator | Friday 30 January 2026 06:46:37 +0000 (0:00:01.287) 0:58:31.278 ********
2026-01-30 06:47:08.567691 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5, testbed-node-4
2026-01-30 06:47:08.567701 | orchestrator |
2026-01-30 06:47:08.567709 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-01-30 06:47:08.567725 | orchestrator | Friday 30 January 2026 06:46:38 +0000 (0:00:01.284) 0:58:32.563 ********
2026-01-30 06:47:08.567738 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-30 06:47:08.567753 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-01-30 06:47:08.567771 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-01-30 06:47:08.567783 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-01-30 06:47:08.567795 | orchestrator |
2026-01-30 06:47:08.567807 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-01-30 06:47:08.567820 | orchestrator | Friday 30 January 2026 06:46:40 +0000 (0:00:03.350) 0:58:34.521 ********
2026-01-30 06:47:08.567842 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:47:08.567855 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-30 06:47:08.567864 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-30 06:47:08.567871 | orchestrator |
2026-01-30 06:47:08.567878 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-01-30 06:47:08.567885 | orchestrator | Friday 30 January 2026 06:46:44 +0000 (0:00:02.208) 0:58:37.871 ********
2026-01-30 06:47:08.567893 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-01-30 06:47:08.567900 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-30 06:47:08.567907 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.567914 | orchestrator | ok: [testbed-node-4] => (item=None)
2026-01-30 06:47:08.567922 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-30 06:47:08.567929 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.567936 | orchestrator |
2026-01-30 06:47:08.567943 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-01-30 06:47:08.567954 | orchestrator | Friday 30 January 2026 06:46:46 +0000 (0:00:02.208) 0:58:40.080 ********
2026-01-30 06:47:08.567966 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.567978 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.567990 | orchestrator |
2026-01-30 06:47:08.568001 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-01-30 06:47:08.568013 | orchestrator | Friday 30 January 2026 06:46:48 +0000 (0:00:02.180) 0:58:42.260 ********
2026-01-30 06:47:08.568024 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.568035 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:47:08.568046 | orchestrator |
2026-01-30 06:47:08.568057 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-01-30 06:47:08.568069 | orchestrator | Friday 30 January 2026 06:46:49 +0000 (0:00:01.268) 0:58:43.529 ********
2026-01-30 06:47:08.568080 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5, testbed-node-4
2026-01-30 06:47:08.568092 | orchestrator |
2026-01-30 06:47:08.568104 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-01-30 06:47:08.568115 | orchestrator | Friday 30 January 2026 06:46:51 +0000 (0:00:01.221) 0:58:44.751 ********
2026-01-30 06:47:08.568127 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5, testbed-node-4
2026-01-30 06:47:08.568139 | orchestrator |
2026-01-30 06:47:08.568152 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-01-30 06:47:08.568163 | orchestrator | Friday 30 January 2026 06:46:52 +0000 (0:00:01.223) 0:58:45.975 ********
2026-01-30 06:47:08.568172 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.568183 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.568193 | orchestrator |
2026-01-30 06:47:08.568204 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-01-30 06:47:08.568215 | orchestrator | Friday 30 January 2026 06:46:54 +0000 (0:00:02.192) 0:58:48.167 ********
2026-01-30 06:47:08.568226 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.568237 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.568248 | orchestrator |
2026-01-30 06:47:08.568259 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-01-30 06:47:08.568282 | orchestrator | Friday 30 January 2026 06:46:56 +0000 (0:00:02.404) 0:58:50.571 ********
2026-01-30 06:47:08.568294 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.568305 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.568317 | orchestrator |
2026-01-30 06:47:08.568329 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-01-30 06:47:08.568340 | orchestrator | Friday 30 January 2026 06:46:59 +0000 (0:00:02.518) 0:58:53.089 ********
2026-01-30 06:47:08.568352 | orchestrator | changed: [testbed-node-5]
2026-01-30 06:47:08.568386 | orchestrator | changed: [testbed-node-4]
2026-01-30 06:47:08.568400 | orchestrator |
2026-01-30 06:47:08.568411 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-01-30 06:47:08.568422 | orchestrator | Friday 30 January 2026 06:47:03 +0000 (0:00:03.716) 0:58:56.806 ********
2026-01-30 06:47:08.568434 | orchestrator | ok: [testbed-node-5]
2026-01-30 06:47:08.568447 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:47:08.568459 | orchestrator |
2026-01-30 06:47:08.568471 | orchestrator | TASK [Set max_mds] *************************************************************
2026-01-30 06:47:08.568483 | orchestrator | Friday 30 January 2026 06:47:05 +0000 (0:00:01.873) 0:58:58.679 ********
2026-01-30 06:47:08.568496 | orchestrator | skipping: [testbed-node-5]
2026-01-30 06:47:08.568518 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:47:32.303699 | orchestrator |
2026-01-30 06:47:32.303815 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-01-30 06:47:32.303831 | orchestrator |
2026-01-30 06:47:32.303842 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 06:47:32.303853 | orchestrator | Friday 30 January 2026 06:47:08 +0000 (0:00:03.485) 0:59:02.164 ********
2026-01-30 06:47:32.303863 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-01-30 06:47:32.303874 | orchestrator |
2026-01-30 06:47:32.303884 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-01-30 06:47:32.303895 | orchestrator | Friday 30 January 2026 06:47:09 +0000 (0:00:01.326) 0:59:03.491 ********
2026-01-30 06:47:32.303905 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.303916 | orchestrator |
2026-01-30 06:47:32.303926 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-01-30 06:47:32.303937 | orchestrator | Friday 30 January 2026 06:47:11 +0000 (0:00:01.487) 0:59:04.979 ********
2026-01-30 06:47:32.303947 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.303958 | orchestrator |
2026-01-30 06:47:32.303968 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 06:47:32.303978 | orchestrator | Friday 30 January 2026 06:47:12 +0000 (0:00:01.130) 0:59:06.110 ********
2026-01-30 06:47:32.303989 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.303999 |
orchestrator |
2026-01-30 06:47:32.304009 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 06:47:32.304019 | orchestrator | Friday 30 January 2026 06:47:13 +0000 (0:00:01.425) 0:59:07.535 ********
2026-01-30 06:47:32.304029 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.304040 | orchestrator |
2026-01-30 06:47:32.304066 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-01-30 06:47:32.304077 | orchestrator | Friday 30 January 2026 06:47:15 +0000 (0:00:01.131) 0:59:08.667 ********
2026-01-30 06:47:32.304088 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.304098 | orchestrator |
2026-01-30 06:47:32.304108 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-01-30 06:47:32.304119 | orchestrator | Friday 30 January 2026 06:47:16 +0000 (0:00:01.146) 0:59:09.814 ********
2026-01-30 06:47:32.304129 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.304140 | orchestrator |
2026-01-30 06:47:32.304150 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-01-30 06:47:32.304161 | orchestrator | Friday 30 January 2026 06:47:17 +0000 (0:00:01.146) 0:59:10.960 ********
2026-01-30 06:47:32.304194 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:32.304206 | orchestrator |
2026-01-30 06:47:32.304216 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-01-30 06:47:32.304227 | orchestrator | Friday 30 January 2026 06:47:18 +0000 (0:00:01.119) 0:59:12.080 ********
2026-01-30 06:47:32.304238 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.304248 | orchestrator |
2026-01-30 06:47:32.304258 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-01-30 06:47:32.304269 | orchestrator | Friday 30 January 2026 06:47:19 +0000 (0:00:01.121) 0:59:13.201 ********
2026-01-30 06:47:32.304280 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:47:32.304290 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:47:32.304300 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:47:32.304311 | orchestrator |
2026-01-30 06:47:32.304375 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-01-30 06:47:32.304386 | orchestrator | Friday 30 January 2026 06:47:21 +0000 (0:00:02.053) 0:59:15.254 ********
2026-01-30 06:47:32.304396 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:32.304405 | orchestrator |
2026-01-30 06:47:32.304414 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-01-30 06:47:32.304424 | orchestrator | Friday 30 January 2026 06:47:22 +0000 (0:00:01.301) 0:59:16.556 ********
2026-01-30 06:47:32.304433 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:47:32.304443 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:47:32.304452 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:47:32.304461 | orchestrator |
2026-01-30 06:47:32.304470 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-30 06:47:32.304479 | orchestrator | Friday 30 January 2026 06:47:26 +0000 (0:00:03.329) 0:59:19.885 ********
2026-01-30 06:47:32.304489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-30 06:47:32.304498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-30 06:47:32.304507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-30 06:47:32.304516 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:32.304526 | orchestrator |
2026-01-30 06:47:32.304535 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-30 06:47:32.304544 | orchestrator | Friday 30 January 2026 06:47:28 +0000 (0:00:01.910) 0:59:21.796 ********
2026-01-30 06:47:32.304555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304568 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304608 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:32.304617 | orchestrator |
2026-01-30 06:47:32.304627 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-30 06:47:32.304637 | orchestrator | Friday 30 January 2026 06:47:29 +0000 (0:00:01.724) 0:59:23.521 ********
2026-01-30 06:47:32.304648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304688 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304698 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:32.304707 | orchestrator |
2026-01-30 06:47:32.304717 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-30 06:47:32.304727 | orchestrator | Friday 30 January 2026 06:47:31 +0000 (0:00:01.184) 0:59:24.706 ********
2026-01-30 06:47:32.304739 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:47:23.941062', 'end': '2026-01-30 06:47:23.988104', 'delta': '0:00:00.047042', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item':
'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304753 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:47:24.523094', 'end': '2026-01-30 06:47:24.575958', 'delta': '0:00:00.052864', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304763 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:47:25.076578', 'end': '2026-01-30 06:47:25.119486', 'delta': '0:00:00.042908', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-30 06:47:32.304773 | orchestrator |
2026-01-30 06:47:32.304790 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-30 06:47:49.894875 | orchestrator | Friday 30 January 2026 06:47:32 +0000 (0:00:01.195) 0:59:25.901 ********
2026-01-30 06:47:49.894959 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:49.894981 | orchestrator |
2026-01-30 06:47:49.894987 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-30 06:47:49.895002 | orchestrator | Friday 30 January 2026 06:47:33 +0000 (0:00:01.280) 0:59:27.232 ********
2026-01-30 06:47:49.895007 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:49.895012 | orchestrator |
2026-01-30 06:47:49.895016 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-30 06:47:49.895021 | orchestrator | Friday 30 January 2026 06:47:34 +0000 (0:00:01.280) 0:59:28.512 ********
2026-01-30 06:47:49.895024 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:49.895035 | orchestrator |
2026-01-30 06:47:49.895039 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-30 06:47:49.895043 | orchestrator | Friday 30 January 2026 06:47:36 +0000 (0:00:01.120) 0:59:29.633 ********
2026-01-30 06:47:49.895047 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-30 06:47:49.895051 | orchestrator |
2026-01-30 06:47:49.895055 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:47:49.895059 | orchestrator | Friday 30 January 2026 06:47:38 +0000 (0:00:02.040) 0:59:31.673 ********
2026-01-30 06:47:49.895063 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:49.895066 | orchestrator |
2026-01-30 06:47:49.895070 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-30 06:47:49.895084 | orchestrator | Friday 30 January 2026 06:47:39 +0000 (0:00:01.142) 0:59:32.816 ********
2026-01-30 06:47:49.895088 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:49.895092 | orchestrator |
2026-01-30 06:47:49.895096 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-30 06:47:49.895100 | orchestrator | Friday 30 January 2026 06:47:40 +0000 (0:00:01.121) 0:59:33.938 ********
2026-01-30 06:47:49.895104 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:49.895107 | orchestrator |
2026-01-30 06:47:49.895111 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-30 06:47:49.895115 | orchestrator | Friday 30 January 2026 06:47:41 +0000 (0:00:01.241) 0:59:35.179 ********
2026-01-30 06:47:49.895119 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:49.895123 | orchestrator |
2026-01-30 06:47:49.895127 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-30 06:47:49.895131 | orchestrator | Friday 30 January 2026 06:47:42 +0000 (0:00:01.102) 0:59:36.281 ********
2026-01-30 06:47:49.895134 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:49.895138 | orchestrator |
2026-01-30 06:47:49.895143 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-30 06:47:49.895147 | orchestrator | Friday 30 January 2026 06:47:43 +0000 (0:00:01.124) 0:59:37.406 ********
2026-01-30 06:47:49.895151 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:49.895154 | orchestrator |
2026-01-30 06:47:49.895158 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-30 06:47:49.895162 | orchestrator | Friday 30 January 2026 06:47:45 +0000 (0:00:01.220) 0:59:38.626 ********
2026-01-30 06:47:49.895166 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:47:49.895170 | orchestrator |
2026-01-30 06:47:49.895174 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-30 06:47:49.895178 | orchestrator | Friday 30 January 2026 06:47:46 +0000 (0:00:01.172) 0:59:39.799 ********
2026-01-30 06:47:49.895181 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:47:49.895185 | orchestrator |
2026-01-30 06:47:49.895189 |
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 06:47:49.895205 | orchestrator | Friday 30 January 2026 06:47:47 +0000 (0:00:01.163) 0:59:40.963 ******** 2026-01-30 06:47:49.895209 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:47:49.895213 | orchestrator | 2026-01-30 06:47:49.895217 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 06:47:49.895222 | orchestrator | Friday 30 January 2026 06:47:48 +0000 (0:00:01.099) 0:59:42.062 ******** 2026-01-30 06:47:49.895230 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:47:49.895234 | orchestrator | 2026-01-30 06:47:49.895238 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 06:47:49.895242 | orchestrator | Friday 30 January 2026 06:47:49 +0000 (0:00:01.187) 0:59:43.250 ******** 2026-01-30 06:47:49.895247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:49.895255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}})  2026-01-30 06:47:49.895272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:47:49.895281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}})  2026-01-30 06:47:49.895328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:49.895333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:49.895338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 06:47:49.895347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:49.895351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:47:49.895360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:51.241659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}})  2026-01-30 06:47:51.241747 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}})  2026-01-30 06:47:51.241755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:51.241766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 06:47:51.241798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:51.241803 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:47:51.241812 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:47:51.241818 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:47:51.241824 | orchestrator | 2026-01-30 06:47:51.241828 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:47:51.241834 | orchestrator | Friday 30 January 2026 06:47:50 +0000 (0:00:01.356) 0:59:44.606 ******** 2026-01-30 06:47:51.241840 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:51.241849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b', 'dm-uuid-LVM-pkgr33ovn4zTsGvGBwe1sKdyyLPHeMlO4cNZbD5o9w7hQxVDPpfOETcVwQImoLfA'], 'uuids': ['818e3b96-1bdd-42c6-b020-ad533e9dbd9f'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:51.241855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db', 'scsi-SQEMU_QEMU_HARDDISK_89867505-ff36-4695-8b18-6c1e230d96db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '89867505', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:51.241865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-duz8ll-JZYI-sgb0-wmzh-zFPL-PQv7-15PJTT', 'scsi-0QEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e', 'scsi-SQEMU_QEMU_HARDDISK_ac342dcc-6378-474e-8bd4-fa421e59d21e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387253 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-08-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587', 'dm-uuid-CRYPT-LUKS2-739b907ede5f4f48b6215697c64bb966-QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0-osd--block--8ea9dc5c--1d02--5b7a--b23f--cb4648b979f0', 'dm-uuid-LVM-eE31lxqI0hQheF1GLJhgpEhyyPVp791kQIMeFskpf2TM8FeGhHf5mYjaNYbGj587'], 'uuids': ['739b907e-de5f-4f48-b621-5697c64bb966'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ac342dcc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QIMeFs-kpf2-TM8F-eGhH-f5mY-jaNY-bGj587']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-tEJ8NN-nEAY-X0Qu-ptIC-5Us1-KcS7-kfh1M4', 'scsi-0QEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c', 'scsi-SQEMU_QEMU_HARDDISK_f069451a-3954-45d9-86d9-4bd6a8a4900c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f069451a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b-osd--block--a8f13564--aa0f--525b--b1f5--f4cdb3fdc88b']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:47:52.387401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '45889879', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1', 'scsi-SQEMU_QEMU_HARDDISK_45889879-29ea-4e0d-a22d-11f14312e02a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:48:21.797488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:48:21.797669 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:48:21.797690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA', 'dm-uuid-CRYPT-LUKS2-818e3b961bdd42c6b020ad533e9dbd9f-4cNZbD-5o9w-7hQx-VDPp-fOET-cVwQ-ImoLfA'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:48:21.797704 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.797717 | orchestrator | 2026-01-30 06:48:21.797730 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:48:21.797742 | orchestrator | Friday 30 January 2026 06:47:52 +0000 (0:00:01.385) 0:59:45.992 ******** 2026-01-30 06:48:21.797753 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:48:21.797766 | orchestrator | 2026-01-30 06:48:21.797777 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:48:21.797788 | orchestrator | Friday 30 January 2026 06:47:53 +0000 (0:00:01.484) 0:59:47.476 ******** 2026-01-30 06:48:21.797799 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:48:21.797810 | orchestrator | 2026-01-30 06:48:21.797821 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:48:21.797832 | orchestrator | Friday 30 January 2026 06:47:54 +0000 (0:00:01.106) 0:59:48.583 ******** 2026-01-30 06:48:21.797843 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:48:21.797854 | orchestrator | 2026-01-30 06:48:21.797865 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:48:21.797876 | orchestrator | Friday 30 January 2026 06:47:56 +0000 (0:00:01.487) 0:59:50.070 ******** 2026-01-30 06:48:21.797887 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.797898 | orchestrator | 2026-01-30 06:48:21.797910 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:48:21.797921 | orchestrator | Friday 30 January 2026 06:47:57 +0000 (0:00:01.116) 0:59:51.187 ******** 2026-01-30 06:48:21.797931 | orchestrator | skipping: [testbed-node-3] 2026-01-30 
06:48:21.797942 | orchestrator | 2026-01-30 06:48:21.797953 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:48:21.797964 | orchestrator | Friday 30 January 2026 06:47:58 +0000 (0:00:01.308) 0:59:52.495 ******** 2026-01-30 06:48:21.797976 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.797988 | orchestrator | 2026-01-30 06:48:21.797998 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:48:21.798105 | orchestrator | Friday 30 January 2026 06:48:00 +0000 (0:00:01.164) 0:59:53.659 ******** 2026-01-30 06:48:21.798120 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-01-30 06:48:21.798133 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-01-30 06:48:21.798145 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-01-30 06:48:21.798161 | orchestrator | 2026-01-30 06:48:21.798201 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:48:21.798336 | orchestrator | Friday 30 January 2026 06:48:02 +0000 (0:00:02.208) 0:59:55.868 ******** 2026-01-30 06:48:21.798363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-30 06:48:21.798381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-30 06:48:21.798400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-30 06:48:21.798419 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.798436 | orchestrator | 2026-01-30 06:48:21.798457 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:48:21.798475 | orchestrator | Friday 30 January 2026 06:48:03 +0000 (0:00:01.213) 0:59:57.081 ******** 2026-01-30 06:48:21.798517 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-01-30 06:48:21.798531 | 
orchestrator | 2026-01-30 06:48:21.798542 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:48:21.798553 | orchestrator | Friday 30 January 2026 06:48:04 +0000 (0:00:01.129) 0:59:58.211 ******** 2026-01-30 06:48:21.798563 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.798572 | orchestrator | 2026-01-30 06:48:21.798582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:48:21.798591 | orchestrator | Friday 30 January 2026 06:48:05 +0000 (0:00:01.154) 0:59:59.365 ******** 2026-01-30 06:48:21.798601 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.798610 | orchestrator | 2026-01-30 06:48:21.798620 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:48:21.798629 | orchestrator | Friday 30 January 2026 06:48:06 +0000 (0:00:01.168) 1:00:00.534 ******** 2026-01-30 06:48:21.798638 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.798648 | orchestrator | 2026-01-30 06:48:21.798657 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:48:21.798667 | orchestrator | Friday 30 January 2026 06:48:08 +0000 (0:00:01.164) 1:00:01.698 ******** 2026-01-30 06:48:21.798677 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:48:21.798686 | orchestrator | 2026-01-30 06:48:21.798696 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:48:21.798705 | orchestrator | Friday 30 January 2026 06:48:09 +0000 (0:00:01.217) 1:00:02.916 ******** 2026-01-30 06:48:21.798715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:48:21.798725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:48:21.798734 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-01-30 06:48:21.798744 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.798753 | orchestrator | 2026-01-30 06:48:21.798762 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:48:21.798772 | orchestrator | Friday 30 January 2026 06:48:10 +0000 (0:00:01.444) 1:00:04.361 ******** 2026-01-30 06:48:21.798781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:48:21.798791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:48:21.798800 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:48:21.798810 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.798819 | orchestrator | 2026-01-30 06:48:21.798829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:48:21.798838 | orchestrator | Friday 30 January 2026 06:48:12 +0000 (0:00:01.477) 1:00:05.839 ******** 2026-01-30 06:48:21.798860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-30 06:48:21.798870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-30 06:48:21.798880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-30 06:48:21.798889 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:48:21.798899 | orchestrator | 2026-01-30 06:48:21.798908 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:48:21.798918 | orchestrator | Friday 30 January 2026 06:48:13 +0000 (0:00:01.374) 1:00:07.213 ******** 2026-01-30 06:48:21.798927 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:48:21.798937 | orchestrator | 2026-01-30 06:48:21.798946 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:48:21.798956 | orchestrator | Friday 30 January 2026 06:48:14 +0000 
(0:00:01.152) 1:00:08.366 ******** 2026-01-30 06:48:21.798965 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-30 06:48:21.798974 | orchestrator | 2026-01-30 06:48:21.798984 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:48:21.798993 | orchestrator | Friday 30 January 2026 06:48:16 +0000 (0:00:01.756) 1:00:10.123 ******** 2026-01-30 06:48:21.799003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:48:21.799012 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:48:21.799022 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:48:21.799031 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-30 06:48:21.799040 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:48:21.799050 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:48:21.799059 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:48:21.799069 | orchestrator | 2026-01-30 06:48:21.799078 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:48:21.799088 | orchestrator | Friday 30 January 2026 06:48:18 +0000 (0:00:02.227) 1:00:12.350 ******** 2026-01-30 06:48:21.799097 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:48:21.799115 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:48:21.799124 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:48:21.799134 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-30 06:48:21.799143 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:48:21.799153 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-30 06:48:21.799162 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:48:21.799172 | orchestrator | 2026-01-30 06:48:21.799188 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-01-30 06:49:14.586826 | orchestrator | Friday 30 January 2026 06:48:21 +0000 (0:00:03.039) 1:00:15.390 ******** 2026-01-30 06:49:14.586977 | orchestrator | changed: [testbed-node-3] 2026-01-30 06:49:14.586993 | orchestrator | 2026-01-30 06:49:14.587002 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-01-30 06:49:14.587009 | orchestrator | Friday 30 January 2026 06:48:24 +0000 (0:00:02.388) 1:00:17.779 ******** 2026-01-30 06:49:14.587017 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:49:14.587025 | orchestrator | 2026-01-30 06:49:14.587032 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-01-30 06:49:14.587039 | orchestrator | Friday 30 January 2026 06:48:27 +0000 (0:00:02.873) 1:00:20.653 ******** 2026-01-30 06:49:14.587074 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:49:14.587080 | orchestrator | 2026-01-30 06:49:14.587086 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 06:49:14.587092 | orchestrator | Friday 30 January 2026 06:48:29 +0000 (0:00:02.255) 1:00:22.908 ******** 2026-01-30 06:49:14.587098 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-01-30 06:49:14.587105 | orchestrator | 2026-01-30 06:49:14.587110 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 06:49:14.587116 | orchestrator | Friday 30 January 2026 06:48:30 +0000 (0:00:01.149) 1:00:24.058 ******** 2026-01-30 06:49:14.587122 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-01-30 06:49:14.587129 | orchestrator | 2026-01-30 06:49:14.587135 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 06:49:14.587200 | orchestrator | Friday 30 January 2026 06:48:31 +0000 (0:00:01.141) 1:00:25.200 ******** 2026-01-30 06:49:14.587209 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587215 | orchestrator | 2026-01-30 06:49:14.587222 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 06:49:14.587228 | orchestrator | Friday 30 January 2026 06:48:32 +0000 (0:00:01.107) 1:00:26.308 ******** 2026-01-30 06:49:14.587234 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587242 | orchestrator | 2026-01-30 06:49:14.587250 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-01-30 06:49:14.587257 | orchestrator | Friday 30 January 2026 06:48:34 +0000 (0:00:01.576) 1:00:27.885 ******** 2026-01-30 06:49:14.587263 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587270 | orchestrator | 2026-01-30 06:49:14.587277 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 06:49:14.587283 | orchestrator | Friday 30 January 2026 06:48:35 +0000 (0:00:01.541) 1:00:29.426 ******** 2026-01-30 06:49:14.587291 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587298 | orchestrator | 2026-01-30 06:49:14.587306 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:49:14.587313 | orchestrator | Friday 30 January 2026 06:48:37 +0000 (0:00:01.535) 1:00:30.962 ******** 2026-01-30 06:49:14.587320 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587326 | orchestrator | 2026-01-30 06:49:14.587333 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:49:14.587338 | orchestrator | Friday 30 January 2026 06:48:38 +0000 (0:00:01.254) 1:00:32.216 ******** 2026-01-30 06:49:14.587343 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587348 | orchestrator | 2026-01-30 06:49:14.587354 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:49:14.587359 | orchestrator | Friday 30 January 2026 06:48:39 +0000 (0:00:01.181) 1:00:33.397 ******** 2026-01-30 06:49:14.587364 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587369 | orchestrator | 2026-01-30 06:49:14.587374 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:49:14.587379 | orchestrator | Friday 30 January 2026 06:48:40 +0000 (0:00:01.155) 1:00:34.553 ******** 2026-01-30 06:49:14.587384 | 
orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587389 | orchestrator | 2026-01-30 06:49:14.587395 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:49:14.587400 | orchestrator | Friday 30 January 2026 06:48:42 +0000 (0:00:01.540) 1:00:36.093 ******** 2026-01-30 06:49:14.587405 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587410 | orchestrator | 2026-01-30 06:49:14.587415 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:49:14.587420 | orchestrator | Friday 30 January 2026 06:48:44 +0000 (0:00:01.567) 1:00:37.661 ******** 2026-01-30 06:49:14.587425 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587439 | orchestrator | 2026-01-30 06:49:14.587444 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:49:14.587448 | orchestrator | Friday 30 January 2026 06:48:45 +0000 (0:00:01.171) 1:00:38.833 ******** 2026-01-30 06:49:14.587453 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587458 | orchestrator | 2026-01-30 06:49:14.587463 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:49:14.587483 | orchestrator | Friday 30 January 2026 06:48:46 +0000 (0:00:01.098) 1:00:39.931 ******** 2026-01-30 06:49:14.587489 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587493 | orchestrator | 2026-01-30 06:49:14.587498 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:49:14.587503 | orchestrator | Friday 30 January 2026 06:48:47 +0000 (0:00:01.124) 1:00:41.056 ******** 2026-01-30 06:49:14.587508 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587513 | orchestrator | 2026-01-30 06:49:14.587518 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:49:14.587523 
| orchestrator | Friday 30 January 2026 06:48:48 +0000 (0:00:01.188) 1:00:42.244 ******** 2026-01-30 06:49:14.587528 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587533 | orchestrator | 2026-01-30 06:49:14.587557 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:49:14.587562 | orchestrator | Friday 30 January 2026 06:48:49 +0000 (0:00:01.156) 1:00:43.401 ******** 2026-01-30 06:49:14.587567 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587572 | orchestrator | 2026-01-30 06:49:14.587577 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:49:14.587582 | orchestrator | Friday 30 January 2026 06:48:50 +0000 (0:00:01.120) 1:00:44.521 ******** 2026-01-30 06:49:14.587586 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587591 | orchestrator | 2026-01-30 06:49:14.587597 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:49:14.587602 | orchestrator | Friday 30 January 2026 06:48:52 +0000 (0:00:01.124) 1:00:45.646 ******** 2026-01-30 06:49:14.587607 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:49:14.587611 | orchestrator | 2026-01-30 06:49:14.587616 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:49:14.587621 | orchestrator | Friday 30 January 2026 06:48:53 +0000 (0:00:01.130) 1:00:46.776 ******** 2026-01-30 06:49:14.587626 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587631 | orchestrator | 2026-01-30 06:49:14.587636 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:49:14.587641 | orchestrator | Friday 30 January 2026 06:48:54 +0000 (0:00:01.231) 1:00:48.007 ******** 2026-01-30 06:49:14.587646 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:49:14.587651 | orchestrator | 2026-01-30 06:49:14.587656 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-01-30 06:49:14.587661 | orchestrator | Friday 30 January 2026 06:48:55 +0000 (0:00:01.150) 1:00:49.158 ********
2026-01-30 06:49:14.587666 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587670 | orchestrator |
2026-01-30 06:49:14.587676 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-01-30 06:49:14.587680 | orchestrator | Friday 30 January 2026 06:48:56 +0000 (0:00:01.147) 1:00:50.308 ********
2026-01-30 06:49:14.587685 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587690 | orchestrator |
2026-01-30 06:49:14.587695 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-01-30 06:49:14.587700 | orchestrator | Friday 30 January 2026 06:48:57 +0000 (0:00:01.130) 1:00:51.438 ********
2026-01-30 06:49:14.587705 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587709 | orchestrator |
2026-01-30 06:49:14.587713 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-01-30 06:49:14.587718 | orchestrator | Friday 30 January 2026 06:48:58 +0000 (0:00:01.162) 1:00:52.600 ********
2026-01-30 06:49:14.587722 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587732 | orchestrator |
2026-01-30 06:49:14.587736 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-01-30 06:49:14.587740 | orchestrator | Friday 30 January 2026 06:49:00 +0000 (0:00:01.121) 1:00:53.722 ********
2026-01-30 06:49:14.587744 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587749 | orchestrator |
2026-01-30 06:49:14.587753 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-01-30 06:49:14.587757 | orchestrator | Friday 30 January 2026 06:49:01 +0000 (0:00:01.172) 1:00:54.894 ********
2026-01-30 06:49:14.587761 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587766 | orchestrator |
2026-01-30 06:49:14.587770 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-01-30 06:49:14.587774 | orchestrator | Friday 30 January 2026 06:49:02 +0000 (0:00:01.115) 1:00:56.009 ********
2026-01-30 06:49:14.587779 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587783 | orchestrator |
2026-01-30 06:49:14.587787 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-01-30 06:49:14.587793 | orchestrator | Friday 30 January 2026 06:49:03 +0000 (0:00:01.109) 1:00:57.119 ********
2026-01-30 06:49:14.587797 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587801 | orchestrator |
2026-01-30 06:49:14.587806 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-01-30 06:49:14.587810 | orchestrator | Friday 30 January 2026 06:49:04 +0000 (0:00:01.118) 1:00:58.238 ********
2026-01-30 06:49:14.587814 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587818 | orchestrator |
2026-01-30 06:49:14.587823 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-01-30 06:49:14.587827 | orchestrator | Friday 30 January 2026 06:49:05 +0000 (0:00:01.117) 1:00:59.355 ********
2026-01-30 06:49:14.587831 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587835 | orchestrator |
2026-01-30 06:49:14.587840 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-01-30 06:49:14.587844 | orchestrator | Friday 30 January 2026 06:49:06 +0000 (0:00:01.120) 1:01:00.476 ********
2026-01-30 06:49:14.587848 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587853 | orchestrator |
2026-01-30 06:49:14.587858 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-01-30 06:49:14.587865 | orchestrator | Friday 30 January 2026 06:49:08 +0000 (0:00:01.233) 1:01:01.709 ********
2026-01-30 06:49:14.587874 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:49:14.587884 | orchestrator |
2026-01-30 06:49:14.587890 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-01-30 06:49:14.587896 | orchestrator | Friday 30 January 2026 06:49:09 +0000 (0:00:01.127) 1:01:02.837 ********
2026-01-30 06:49:14.587908 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:49:14.587914 | orchestrator |
2026-01-30 06:49:14.587921 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-01-30 06:49:14.587927 | orchestrator | Friday 30 January 2026 06:49:11 +0000 (0:00:01.980) 1:01:04.818 ********
2026-01-30 06:49:14.587934 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:49:14.587941 | orchestrator |
2026-01-30 06:49:14.587948 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-01-30 06:49:14.587955 | orchestrator | Friday 30 January 2026 06:49:13 +0000 (0:00:02.260) 1:01:07.079 ********
2026-01-30 06:49:14.587963 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-01-30 06:49:14.587970 | orchestrator |
2026-01-30 06:49:14.587978 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-01-30 06:49:14.587991 | orchestrator | Friday 30 January 2026 06:49:14 +0000 (0:00:01.105) 1:01:08.184 ********
2026-01-30 06:50:01.276885 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277015 | orchestrator |
2026-01-30 06:50:01.277036 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-01-30 06:50:01.277048 | orchestrator | Friday 30 January 2026 06:49:15 +0000 (0:00:01.132) 1:01:09.317 ********
2026-01-30 06:50:01.277138 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277155 | orchestrator |
2026-01-30 06:50:01.277168 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-01-30 06:50:01.277181 | orchestrator | Friday 30 January 2026 06:49:16 +0000 (0:00:01.105) 1:01:10.422 ********
2026-01-30 06:50:01.277194 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-01-30 06:50:01.277206 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-01-30 06:50:01.277224 | orchestrator |
2026-01-30 06:50:01.277243 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-01-30 06:50:01.277255 | orchestrator | Friday 30 January 2026 06:49:18 +0000 (0:00:01.902) 1:01:12.324 ********
2026-01-30 06:50:01.277266 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:01.277278 | orchestrator |
2026-01-30 06:50:01.277289 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-01-30 06:50:01.277301 | orchestrator | Friday 30 January 2026 06:49:20 +0000 (0:00:01.505) 1:01:13.830 ********
2026-01-30 06:50:01.277313 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277325 | orchestrator |
2026-01-30 06:50:01.277338 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-01-30 06:50:01.277350 | orchestrator | Friday 30 January 2026 06:49:21 +0000 (0:00:01.132) 1:01:14.963 ********
2026-01-30 06:50:01.277363 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277370 | orchestrator |
2026-01-30 06:50:01.277378 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-01-30 06:50:01.277385 | orchestrator | Friday 30 January 2026 06:49:22 +0000 (0:00:01.130) 1:01:16.094 ********
2026-01-30 06:50:01.277392 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277400 | orchestrator |
2026-01-30 06:50:01.277407 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-01-30 06:50:01.277415 | orchestrator | Friday 30 January 2026 06:49:23 +0000 (0:00:01.217) 1:01:17.312 ********
2026-01-30 06:50:01.277424 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-01-30 06:50:01.277434 | orchestrator |
2026-01-30 06:50:01.277442 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-01-30 06:50:01.277450 | orchestrator | Friday 30 January 2026 06:49:24 +0000 (0:00:01.117) 1:01:18.429 ********
2026-01-30 06:50:01.277458 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:01.277466 | orchestrator |
2026-01-30 06:50:01.277475 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-01-30 06:50:01.277483 | orchestrator | Friday 30 January 2026 06:49:26 +0000 (0:00:01.692) 1:01:20.121 ********
2026-01-30 06:50:01.277492 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-01-30 06:50:01.277500 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-01-30 06:50:01.277508 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-01-30 06:50:01.277516 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277524 | orchestrator |
2026-01-30 06:50:01.277532 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-01-30 06:50:01.277541 | orchestrator | Friday 30 January 2026 06:49:27 +0000 (0:00:01.137) 1:01:21.259 ********
2026-01-30 06:50:01.277549 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277560 | orchestrator |
2026-01-30 06:50:01.277572 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-01-30 06:50:01.277583 | orchestrator | Friday 30 January 2026 06:49:28 +0000 (0:00:01.159) 1:01:22.418 ********
2026-01-30 06:50:01.277595 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277607 | orchestrator |
2026-01-30 06:50:01.277620 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-01-30 06:50:01.277631 | orchestrator | Friday 30 January 2026 06:49:30 +0000 (0:00:01.202) 1:01:23.621 ********
2026-01-30 06:50:01.277651 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277659 | orchestrator |
2026-01-30 06:50:01.277668 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-01-30 06:50:01.277677 | orchestrator | Friday 30 January 2026 06:49:31 +0000 (0:00:01.188) 1:01:24.809 ********
2026-01-30 06:50:01.277685 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277693 | orchestrator |
2026-01-30 06:50:01.277702 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-01-30 06:50:01.277710 | orchestrator | Friday 30 January 2026 06:49:32 +0000 (0:00:01.125) 1:01:25.934 ********
2026-01-30 06:50:01.277719 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277727 | orchestrator |
2026-01-30 06:50:01.277737 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-01-30 06:50:01.277745 | orchestrator | Friday 30 January 2026 06:49:33 +0000 (0:00:01.184) 1:01:27.119 ********
2026-01-30 06:50:01.277765 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:01.277774 | orchestrator |
2026-01-30 06:50:01.277783 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-01-30 06:50:01.277791 | orchestrator | Friday 30 January 2026 06:49:35 +0000 (0:00:02.492) 1:01:29.611 ********
2026-01-30 06:50:01.277799 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:01.277807 | orchestrator |
2026-01-30 06:50:01.277814 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-01-30 06:50:01.277821 | orchestrator | Friday 30 January 2026 06:49:37 +0000 (0:00:01.135) 1:01:30.747 ********
2026-01-30 06:50:01.277828 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-01-30 06:50:01.277835 | orchestrator |
2026-01-30 06:50:01.277842 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-01-30 06:50:01.277867 | orchestrator | Friday 30 January 2026 06:49:38 +0000 (0:00:01.125) 1:01:31.873 ********
2026-01-30 06:50:01.277875 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277882 | orchestrator |
2026-01-30 06:50:01.277889 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-01-30 06:50:01.277897 | orchestrator | Friday 30 January 2026 06:49:39 +0000 (0:00:01.241) 1:01:33.114 ********
2026-01-30 06:50:01.277904 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277911 | orchestrator |
2026-01-30 06:50:01.277918 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-01-30 06:50:01.277925 | orchestrator | Friday 30 January 2026 06:49:40 +0000 (0:00:01.153) 1:01:34.268 ********
2026-01-30 06:50:01.277932 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277939 | orchestrator |
2026-01-30 06:50:01.277947 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-01-30 06:50:01.277954 | orchestrator | Friday 30 January 2026 06:49:41 +0000 (0:00:01.119) 1:01:35.387 ********
2026-01-30 06:50:01.277961 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277968 | orchestrator |
2026-01-30 06:50:01.277975 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-01-30 06:50:01.277982 | orchestrator | Friday 30 January 2026 06:49:42 +0000 (0:00:01.134) 1:01:36.522 ********
2026-01-30 06:50:01.277989 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.277997 | orchestrator |
2026-01-30 06:50:01.278004 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-01-30 06:50:01.278011 | orchestrator | Friday 30 January 2026 06:49:44 +0000 (0:00:01.126) 1:01:37.649 ********
2026-01-30 06:50:01.278102 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.278115 | orchestrator |
2026-01-30 06:50:01.278127 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-01-30 06:50:01.278139 | orchestrator | Friday 30 January 2026 06:49:45 +0000 (0:00:01.114) 1:01:38.763 ********
2026-01-30 06:50:01.278150 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.278161 | orchestrator |
2026-01-30 06:50:01.278169 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-01-30 06:50:01.278176 | orchestrator | Friday 30 January 2026 06:49:46 +0000 (0:00:01.156) 1:01:39.920 ********
2026-01-30 06:50:01.278190 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:01.278197 | orchestrator |
2026-01-30 06:50:01.278205 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-01-30 06:50:01.278230 | orchestrator | Friday 30 January 2026 06:49:47 +0000 (0:00:01.138) 1:01:41.058 ********
2026-01-30 06:50:01.278238 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:01.278254 | orchestrator |
2026-01-30 06:50:01.278261 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-01-30 06:50:01.278269 | orchestrator | Friday 30 January 2026 06:49:48 +0000 (0:00:01.150) 1:01:42.208 ********
2026-01-30 06:50:01.278276 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-01-30 06:50:01.278284 | orchestrator |
2026-01-30 06:50:01.278291 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-01-30 06:50:01.278298 | orchestrator | Friday 30 January 2026 06:49:49 +0000 (0:00:01.121) 1:01:43.330 ********
2026-01-30 06:50:01.278306 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-01-30 06:50:01.278314 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-01-30 06:50:01.278321 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-01-30 06:50:01.278328 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-01-30 06:50:01.278336 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-01-30 06:50:01.278343 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-01-30 06:50:01.278350 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-01-30 06:50:01.278357 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-01-30 06:50:01.278365 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-01-30 06:50:01.278372 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-01-30 06:50:01.278379 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-01-30 06:50:01.278386 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-01-30 06:50:01.278394 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-01-30 06:50:01.278401 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-01-30 06:50:01.278408 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-01-30 06:50:01.278416 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-01-30 06:50:01.278423 | orchestrator |
2026-01-30 06:50:01.278430 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-01-30 06:50:01.278437 | orchestrator | Friday 30 January 2026 06:49:56 +0000 (0:00:06.742) 1:01:50.072 ********
2026-01-30 06:50:01.278445 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-01-30 06:50:01.278452 | orchestrator |
2026-01-30 06:50:01.278459 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-01-30 06:50:01.278472 | orchestrator | Friday 30 January 2026 06:49:57 +0000 (0:00:01.276) 1:01:51.349 ********
2026-01-30 06:50:01.278479 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-30 06:50:01.278489 | orchestrator |
2026-01-30 06:50:01.278496 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-01-30 06:50:01.278503 | orchestrator | Friday 30 January 2026 06:49:59 +0000 (0:00:01.545) 1:01:52.895 ********
2026-01-30 06:50:01.278510 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-30 06:50:01.278518 | orchestrator |
2026-01-30 06:50:01.278525 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-01-30 06:50:01.278539 | orchestrator | Friday 30 January 2026 06:50:01 +0000 (0:00:01.158) 1:01:54.875 ********
2026-01-30 06:50:52.942972 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943161 | orchestrator |
2026-01-30 06:50:52.943229 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-01-30 06:50:52.943253 | orchestrator | Friday 30 January 2026 06:50:02 +0000 (0:00:01.158) 1:01:56.033 ********
2026-01-30 06:50:52.943273 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943294 | orchestrator |
2026-01-30 06:50:52.943314 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-01-30 06:50:52.943334 | orchestrator | Friday 30 January 2026 06:50:03 +0000 (0:00:01.095) 1:01:57.128 ********
2026-01-30 06:50:52.943346 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943356 | orchestrator |
2026-01-30 06:50:52.943367 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-01-30 06:50:52.943378 | orchestrator | Friday 30 January 2026 06:50:04 +0000 (0:00:01.106) 1:01:58.235 ********
2026-01-30 06:50:52.943391 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943403 | orchestrator |
2026-01-30 06:50:52.943416 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-01-30 06:50:52.943428 | orchestrator | Friday 30 January 2026 06:50:05 +0000 (0:00:01.109) 1:01:59.344 ********
2026-01-30 06:50:52.943440 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943452 | orchestrator |
2026-01-30 06:50:52.943465 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-01-30 06:50:52.943478 | orchestrator | Friday 30 January 2026 06:50:06 +0000 (0:00:01.115) 1:02:00.460 ********
2026-01-30 06:50:52.943491 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943503 | orchestrator |
2026-01-30 06:50:52.943515 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-01-30 06:50:52.943527 | orchestrator | Friday 30 January 2026 06:50:07 +0000 (0:00:01.116) 1:02:01.577 ********
2026-01-30 06:50:52.943540 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943551 | orchestrator |
2026-01-30 06:50:52.943563 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-01-30 06:50:52.943576 | orchestrator | Friday 30 January 2026 06:50:09 +0000 (0:00:01.164) 1:02:02.742 ********
2026-01-30 06:50:52.943588 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943600 | orchestrator |
2026-01-30 06:50:52.943613 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-01-30 06:50:52.943625 | orchestrator | Friday 30 January 2026 06:50:10 +0000 (0:00:01.122) 1:02:03.864 ********
2026-01-30 06:50:52.943636 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943647 | orchestrator |
2026-01-30 06:50:52.943658 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-01-30 06:50:52.943668 | orchestrator | Friday 30 January 2026 06:50:11 +0000 (0:00:01.106) 1:02:04.970 ********
2026-01-30 06:50:52.943679 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943689 | orchestrator |
2026-01-30 06:50:52.943700 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-01-30 06:50:52.943710 | orchestrator | Friday 30 January 2026 06:50:12 +0000 (0:00:01.210) 1:02:06.180 ********
2026-01-30 06:50:52.943721 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943731 | orchestrator |
2026-01-30 06:50:52.943742 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-01-30 06:50:52.943752 | orchestrator | Friday 30 January 2026 06:50:13 +0000 (0:00:01.145) 1:02:07.326 ********
2026-01-30 06:50:52.943763 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-01-30 06:50:52.943774 | orchestrator |
2026-01-30 06:50:52.943784 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-01-30 06:50:52.943795 | orchestrator | Friday 30 January 2026 06:50:18 +0000 (0:00:04.455) 1:02:11.781 ********
2026-01-30 06:50:52.943806 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-30 06:50:52.943818 | orchestrator |
2026-01-30 06:50:52.943828 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-01-30 06:50:52.943849 | orchestrator | Friday 30 January 2026 06:50:19 +0000 (0:00:01.170) 1:02:12.952 ********
2026-01-30 06:50:52.943862 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-01-30 06:50:52.943893 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-01-30 06:50:52.943905 | orchestrator |
2026-01-30 06:50:52.943916 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-01-30 06:50:52.943926 | orchestrator | Friday 30 January 2026 06:50:24 +0000 (0:00:05.067) 1:02:18.019 ********
2026-01-30 06:50:52.943937 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.943948 | orchestrator |
2026-01-30 06:50:52.943959 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-01-30 06:50:52.943969 | orchestrator | Friday 30 January 2026 06:50:25 +0000 (0:00:01.106) 1:02:19.126 ********
2026-01-30 06:50:52.943980 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.944069 | orchestrator |
2026-01-30 06:50:52.944095 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:50:52.944139 | orchestrator | Friday 30 January 2026 06:50:26 +0000 (0:00:01.149) 1:02:20.276 ********
2026-01-30 06:50:52.944152 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.944163 | orchestrator |
2026-01-30 06:50:52.944174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:50:52.944185 | orchestrator | Friday 30 January 2026 06:50:27 +0000 (0:00:01.148) 1:02:21.424 ********
2026-01-30 06:50:52.944195 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.944206 | orchestrator |
2026-01-30 06:50:52.944217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:50:52.944228 | orchestrator | Friday 30 January 2026 06:50:29 +0000 (0:00:01.248) 1:02:22.672 ********
2026-01-30 06:50:52.944238 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.944249 | orchestrator |
2026-01-30 06:50:52.944260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:50:52.944270 | orchestrator | Friday 30 January 2026 06:50:30 +0000 (0:00:01.149) 1:02:23.821 ********
2026-01-30 06:50:52.944281 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:52.944293 | orchestrator |
2026-01-30 06:50:52.944304 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:50:52.944314 | orchestrator | Friday 30 January 2026 06:50:31 +0000 (0:00:01.254) 1:02:25.075 ********
2026-01-30 06:50:52.944325 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:50:52.944336 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:50:52.944347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:50:52.944358 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.944368 | orchestrator |
2026-01-30 06:50:52.944379 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:50:52.944390 | orchestrator | Friday 30 January 2026 06:50:33 +0000 (0:00:01.888) 1:02:26.964 ********
2026-01-30 06:50:52.944401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:50:52.944411 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:50:52.944424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:50:52.944442 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.944466 | orchestrator |
2026-01-30 06:50:52.944490 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:50:52.944516 | orchestrator | Friday 30 January 2026 06:50:35 +0000 (0:00:02.015) 1:02:28.790 ********
2026-01-30 06:50:52.944533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-30 06:50:52.944551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-30 06:50:52.944566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-30 06:50:52.944583 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.944601 | orchestrator |
2026-01-30 06:50:52.944619 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:50:52.944637 | orchestrator | Friday 30 January 2026 06:50:37 +0000 (0:00:02.015) 1:02:30.806 ********
2026-01-30 06:50:52.944655 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:52.944673 | orchestrator |
2026-01-30 06:50:52.944692 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:50:52.944705 | orchestrator | Friday 30 January 2026 06:50:38 +0000 (0:00:01.183) 1:02:31.989 ********
2026-01-30 06:50:52.944716 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-30 06:50:52.944727 | orchestrator |
2026-01-30 06:50:52.944737 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-01-30 06:50:52.944748 | orchestrator | Friday 30 January 2026 06:50:39 +0000 (0:00:01.352) 1:02:33.342 ********
2026-01-30 06:50:52.944760 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:52.944777 | orchestrator |
2026-01-30 06:50:52.944789 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-01-30 06:50:52.944799 | orchestrator | Friday 30 January 2026 06:50:41 +0000 (0:00:01.728) 1:02:35.071 ********
2026-01-30 06:50:52.944810 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3
2026-01-30 06:50:52.944821 | orchestrator |
2026-01-30 06:50:52.944831 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-01-30 06:50:52.944842 | orchestrator | Friday 30 January 2026 06:50:42 +0000 (0:00:01.459) 1:02:36.530 ********
2026-01-30 06:50:52.944853 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:50:52.944863 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-30 06:50:52.944874 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-30 06:50:52.944885 | orchestrator |
2026-01-30 06:50:52.944895 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-01-30 06:50:52.944906 | orchestrator | Friday 30 January 2026 06:50:46 +0000 (0:00:03.362) 1:02:39.893 ********
2026-01-30 06:50:52.944917 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-01-30 06:50:52.944927 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-30 06:50:52.944938 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:50:52.944948 | orchestrator |
2026-01-30 06:50:52.944968 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-01-30 06:50:52.944979 | orchestrator | Friday 30 January 2026 06:50:48 +0000 (0:00:01.985) 1:02:41.878 ********
2026-01-30 06:50:52.944989 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:50:52.945032 | orchestrator |
2026-01-30 06:50:52.945050 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-01-30 06:50:52.945061 | orchestrator | Friday 30 January 2026 06:50:49 +0000 (0:00:01.143) 1:02:43.021 ********
2026-01-30 06:50:52.945072 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3
2026-01-30 06:50:52.945084 | orchestrator |
2026-01-30 06:50:52.945094 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-01-30 06:50:52.945105 | orchestrator | Friday 30 January 2026 06:50:50 +0000 (0:00:01.460) 1:02:44.482 ********
2026-01-30 06:50:52.945127 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-30 06:52:09.266217 | orchestrator |
2026-01-30 06:52:09.266311 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-01-30 06:52:09.266341 | orchestrator | Friday 30 January 2026 06:50:52 +0000 (0:00:02.059) 1:02:46.542 ********
2026-01-30 06:52:09.266349 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:52:09.266358 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-01-30 06:52:09.266365 | orchestrator |
2026-01-30 06:52:09.266372 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-01-30 06:52:09.266378 | orchestrator | Friday 30 January 2026 06:50:58 +0000 (0:00:05.527) 1:02:52.069 ********
2026-01-30 06:52:09.266384 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-30 06:52:09.266392 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-30 06:52:09.266398 | orchestrator |
2026-01-30 06:52:09.266404 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-01-30 06:52:09.266410 | orchestrator | Friday 30 January 2026 06:51:01 +0000 (0:00:03.225) 1:02:55.295 ********
2026-01-30 06:52:09.266417 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-01-30 06:52:09.266423 | orchestrator | ok: [testbed-node-3]
2026-01-30 06:52:09.266432 | orchestrator |
2026-01-30 06:52:09.266442 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-01-30 06:52:09.266452 | orchestrator | Friday 30 January 2026 06:51:03 +0000 (0:00:02.008) 1:02:57.304 ********
2026-01-30 06:52:09.266462 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-01-30 06:52:09.266472 | orchestrator |
2026-01-30 06:52:09.266482 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-01-30 06:52:09.266492 | orchestrator | Friday 30 January 2026 06:51:05 +0000 (0:00:01.495) 1:02:58.800 ********
2026-01-30 06:52:09.266502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266577 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:52:09.266588 | orchestrator |
2026-01-30 06:52:09.266598 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-01-30 06:52:09.266608 | orchestrator | Friday 30 January 2026 06:51:06 +0000 (0:00:01.606) 1:03:00.406 ********
2026-01-30 06:52:09.266618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266671 | orchestrator | skipping: [testbed-node-3]
2026-01-30 06:52:09.266681 | orchestrator |
2026-01-30 06:52:09.266691 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-01-30 06:52:09.266711 | orchestrator | Friday 30 January 2026 06:51:08 +0000 (0:00:01.583) 1:03:01.989 ********
2026-01-30 06:52:09.266722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-01-30 06:52:09.266749
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:52:09.266759 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:52:09.266769 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:52:09.266781 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:52:09.266791 | orchestrator | 2026-01-30 06:52:09.266803 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-30 06:52:09.266833 | orchestrator | Friday 30 January 2026 06:51:42 +0000 (0:00:33.876) 1:03:35.866 ******** 2026-01-30 06:52:09.266845 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:52:09.266856 | orchestrator | 2026-01-30 06:52:09.266866 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-30 06:52:09.266875 | orchestrator | Friday 30 January 2026 06:51:43 +0000 (0:00:01.119) 1:03:36.985 ******** 2026-01-30 06:52:09.266885 | orchestrator | skipping: [testbed-node-3] 2026-01-30 06:52:09.266915 | orchestrator | 2026-01-30 06:52:09.266923 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-30 06:52:09.266930 | orchestrator | Friday 30 January 2026 06:51:44 +0000 (0:00:01.108) 1:03:38.094 ******** 2026-01-30 06:52:09.266938 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-01-30 06:52:09.266945 | orchestrator | 2026-01-30 06:52:09.266952 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-01-30 06:52:09.266959 | orchestrator | Friday 30 January 2026 06:51:45 +0000 (0:00:01.496) 1:03:39.590 ******** 2026-01-30 06:52:09.266967 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-01-30 06:52:09.266974 | orchestrator | 2026-01-30 06:52:09.266981 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-30 06:52:09.266988 | orchestrator | Friday 30 January 2026 06:51:47 +0000 (0:00:01.562) 1:03:41.153 ******** 2026-01-30 06:52:09.266995 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:52:09.267003 | orchestrator | 2026-01-30 06:52:09.267010 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-30 06:52:09.267020 | orchestrator | Friday 30 January 2026 06:51:49 +0000 (0:00:02.030) 1:03:43.184 ******** 2026-01-30 06:52:09.267030 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:52:09.267042 | orchestrator | 2026-01-30 06:52:09.267052 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-30 06:52:09.267062 | orchestrator | Friday 30 January 2026 06:51:51 +0000 (0:00:01.950) 1:03:45.134 ******** 2026-01-30 06:52:09.267072 | orchestrator | ok: [testbed-node-3] 2026-01-30 06:52:09.267082 | orchestrator | 2026-01-30 06:52:09.267092 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-30 06:52:09.267102 | orchestrator | Friday 30 January 2026 06:51:53 +0000 (0:00:02.219) 1:03:47.354 ******** 2026-01-30 06:52:09.267111 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-30 06:52:09.267121 | orchestrator | 2026-01-30 06:52:09.267131 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-01-30 06:52:09.267142 | 
orchestrator | 2026-01-30 06:52:09.267151 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:52:09.267161 | orchestrator | Friday 30 January 2026 06:51:56 +0000 (0:00:02.832) 1:03:50.187 ******** 2026-01-30 06:52:09.267181 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-01-30 06:52:09.267190 | orchestrator | 2026-01-30 06:52:09.267201 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 06:52:09.267210 | orchestrator | Friday 30 January 2026 06:51:57 +0000 (0:00:01.133) 1:03:51.320 ******** 2026-01-30 06:52:09.267220 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:09.267230 | orchestrator | 2026-01-30 06:52:09.267240 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 06:52:09.267249 | orchestrator | Friday 30 January 2026 06:51:59 +0000 (0:00:01.478) 1:03:52.799 ******** 2026-01-30 06:52:09.267258 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:09.267267 | orchestrator | 2026-01-30 06:52:09.267277 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:52:09.267286 | orchestrator | Friday 30 January 2026 06:52:00 +0000 (0:00:01.122) 1:03:53.922 ******** 2026-01-30 06:52:09.267296 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:09.267306 | orchestrator | 2026-01-30 06:52:09.267316 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:52:09.267325 | orchestrator | Friday 30 January 2026 06:52:01 +0000 (0:00:01.454) 1:03:55.376 ******** 2026-01-30 06:52:09.267335 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:09.267346 | orchestrator | 2026-01-30 06:52:09.267357 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 06:52:09.267368 | orchestrator | Friday 30 
January 2026 06:52:02 +0000 (0:00:01.122) 1:03:56.498 ******** 2026-01-30 06:52:09.267379 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:09.267390 | orchestrator | 2026-01-30 06:52:09.267401 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 06:52:09.267410 | orchestrator | Friday 30 January 2026 06:52:04 +0000 (0:00:01.207) 1:03:57.706 ******** 2026-01-30 06:52:09.267420 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:09.267430 | orchestrator | 2026-01-30 06:52:09.267440 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 06:52:09.267450 | orchestrator | Friday 30 January 2026 06:52:05 +0000 (0:00:01.142) 1:03:58.849 ******** 2026-01-30 06:52:09.267461 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:09.267472 | orchestrator | 2026-01-30 06:52:09.267491 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 06:52:09.267502 | orchestrator | Friday 30 January 2026 06:52:06 +0000 (0:00:01.149) 1:03:59.999 ******** 2026-01-30 06:52:09.267512 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:09.267522 | orchestrator | 2026-01-30 06:52:09.267531 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 06:52:09.267541 | orchestrator | Friday 30 January 2026 06:52:07 +0000 (0:00:01.138) 1:04:01.138 ******** 2026-01-30 06:52:09.267551 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:52:09.267561 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:52:09.267571 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:52:09.267580 | orchestrator | 2026-01-30 06:52:09.267586 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-01-30 06:52:09.267607 | orchestrator | Friday 30 January 2026 06:52:09 +0000 (0:00:01.727) 1:04:02.865 ******** 2026-01-30 06:52:34.583587 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:34.583680 | orchestrator | 2026-01-30 06:52:34.583690 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 06:52:34.583698 | orchestrator | Friday 30 January 2026 06:52:10 +0000 (0:00:01.260) 1:04:04.126 ******** 2026-01-30 06:52:34.583704 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:52:34.583711 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:52:34.583736 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:52:34.583746 | orchestrator | 2026-01-30 06:52:34.583754 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 06:52:34.583763 | orchestrator | Friday 30 January 2026 06:52:13 +0000 (0:00:02.968) 1:04:07.094 ******** 2026-01-30 06:52:34.583773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-30 06:52:34.583782 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-30 06:52:34.583790 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-30 06:52:34.583799 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.583810 | orchestrator | 2026-01-30 06:52:34.583819 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 06:52:34.583828 | orchestrator | Friday 30 January 2026 06:52:14 +0000 (0:00:01.385) 1:04:08.480 ******** 2026-01-30 06:52:34.583839 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 06:52:34.583850 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 06:52:34.583883 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 06:52:34.583894 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.583905 | orchestrator | 2026-01-30 06:52:34.583915 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 06:52:34.583925 | orchestrator | Friday 30 January 2026 06:52:16 +0000 (0:00:01.988) 1:04:10.469 ******** 2026-01-30 06:52:34.583937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:34.583949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:34.583958 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:34.583964 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.583969 | orchestrator | 2026-01-30 06:52:34.583988 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 06:52:34.583993 | orchestrator | Friday 30 January 2026 06:52:17 +0000 (0:00:01.134) 1:04:11.604 ******** 2026-01-30 06:52:34.584017 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:52:11.089886', 'end': '2026-01-30 06:52:11.139719', 'delta': '0:00:00.049833', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 06:52:34.584042 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:52:11.682831', 'end': '2026-01-30 06:52:11.748353', 'delta': '0:00:00.065522', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 06:52:34.584052 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:52:12.276770', 'end': '2026-01-30 06:52:12.319638', 'delta': '0:00:00.042868', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 06:52:34.584062 | orchestrator | 2026-01-30 06:52:34.584071 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 06:52:34.584079 | orchestrator | Friday 30 January 2026 06:52:19 +0000 (0:00:01.213) 1:04:12.818 ******** 2026-01-30 06:52:34.584088 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:34.584096 | orchestrator | 2026-01-30 06:52:34.584104 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 06:52:34.584112 | orchestrator | Friday 30 January 2026 06:52:20 +0000 (0:00:01.253) 1:04:14.071 ******** 2026-01-30 06:52:34.584120 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.584129 | orchestrator | 2026-01-30 06:52:34.584138 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-01-30 06:52:34.584148 | orchestrator | Friday 30 January 2026 06:52:22 +0000 (0:00:01.640) 1:04:15.712 ******** 2026-01-30 06:52:34.584157 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:34.584166 | orchestrator | 2026-01-30 06:52:34.584175 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 06:52:34.584183 | orchestrator | Friday 30 January 2026 06:52:23 +0000 (0:00:01.208) 1:04:16.921 ******** 2026-01-30 06:52:34.584192 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:52:34.584201 | orchestrator | 2026-01-30 06:52:34.584211 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:52:34.584221 | orchestrator | Friday 30 January 2026 06:52:25 +0000 (0:00:02.052) 1:04:18.973 ******** 2026-01-30 06:52:34.584231 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:34.584240 | orchestrator | 2026-01-30 06:52:34.584249 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 06:52:34.584259 | orchestrator | Friday 30 January 2026 06:52:26 +0000 (0:00:01.159) 1:04:20.133 ******** 2026-01-30 06:52:34.584269 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.584278 | orchestrator | 2026-01-30 06:52:34.584287 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 06:52:34.584303 | orchestrator | Friday 30 January 2026 06:52:27 +0000 (0:00:01.095) 1:04:21.228 ******** 2026-01-30 06:52:34.584310 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.584315 | orchestrator | 2026-01-30 06:52:34.584322 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:52:34.584328 | orchestrator | Friday 30 January 2026 06:52:28 +0000 (0:00:01.273) 1:04:22.502 ******** 2026-01-30 06:52:34.584334 | orchestrator | 
skipping: [testbed-node-4] 2026-01-30 06:52:34.584341 | orchestrator | 2026-01-30 06:52:34.584347 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 06:52:34.584358 | orchestrator | Friday 30 January 2026 06:52:29 +0000 (0:00:01.100) 1:04:23.603 ******** 2026-01-30 06:52:34.584365 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.584371 | orchestrator | 2026-01-30 06:52:34.584377 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 06:52:34.584384 | orchestrator | Friday 30 January 2026 06:52:31 +0000 (0:00:01.117) 1:04:24.721 ******** 2026-01-30 06:52:34.584390 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:34.584397 | orchestrator | 2026-01-30 06:52:34.584403 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-30 06:52:34.584409 | orchestrator | Friday 30 January 2026 06:52:32 +0000 (0:00:01.153) 1:04:25.874 ******** 2026-01-30 06:52:34.584415 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:34.584421 | orchestrator | 2026-01-30 06:52:34.584427 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-30 06:52:34.584433 | orchestrator | Friday 30 January 2026 06:52:33 +0000 (0:00:01.125) 1:04:26.999 ******** 2026-01-30 06:52:34.584440 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:34.584449 | orchestrator | 2026-01-30 06:52:34.584458 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 06:52:34.584475 | orchestrator | Friday 30 January 2026 06:52:34 +0000 (0:00:01.181) 1:04:28.181 ******** 2026-01-30 06:52:37.039251 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:37.039411 | orchestrator | 2026-01-30 06:52:37.039440 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 06:52:37.039461 
| orchestrator | Friday 30 January 2026 06:52:35 +0000 (0:00:01.092) 1:04:29.274 ******** 2026-01-30 06:52:37.039480 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:52:37.039501 | orchestrator | 2026-01-30 06:52:37.039520 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 06:52:37.039539 | orchestrator | Friday 30 January 2026 06:52:36 +0000 (0:00:01.139) 1:04:30.414 ******** 2026-01-30 06:52:37.039561 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:37.039588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}})  2026-01-30 06:52:37.039612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:52:37.039701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}})  2026-01-30 06:52:37.039747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:37.039774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:37.039824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 06:52:37.039847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:37.039946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:52:37.039968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:37.040003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}})  2026-01-30 06:52:37.040034 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}})  2026-01-30 06:52:37.040053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:37.040096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 06:52:38.813738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:38.813821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:52:38.813834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:52:38.813884 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:52:38.813894 | orchestrator | 2026-01-30 06:52:38.813918 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:52:38.813927 | orchestrator | Friday 30 January 2026 06:52:38 +0000 (0:00:01.360) 1:04:31.775 ******** 2026-01-30 06:52:38.813936 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.813946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939', 'dm-uuid-LVM-bke8hi7wEU6q40E0cPf6MXzsdp7aMlJNxxyYHDfpVDMw8d3rRNPrDRnSHBX3sjuf'], 'uuids': ['4c596dc9-de7b-46b7-a8b5-c464454d08c4'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.813955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c', 'scsi-SQEMU_QEMU_HARDDISK_b216a188-2311-40bc-9fb1-2473213c5e7c'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b216a188', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.813993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UAsjaQ-IFJs-SQpg-A63j-UM3T-eBmm-42ZEy1', 'scsi-0QEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea', 'scsi-SQEMU_QEMU_HARDDISK_61a881f5-0027-4515-8019-0b50414c8fea'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.814000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.814008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.814047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.814053 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:38.814065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1', 'dm-uuid-CRYPT-LUKS2-bca425aa6a4f43fdae511aef4e3b3b2f-uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267-osd--block--3dd49c2b--59d1--5a3f--9cfa--a0fb165dd267', 'dm-uuid-LVM-whCpgf4p6oECdZb3eqzfS9DFJkv3keR5uOjcOqGDbQdeEt9lfxy38HKmxDAEeYV1'], 'uuids': ['bca425aa-6a4f-43fd-ae51-1aef4e3b3b2f'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '61a881f5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uOjcOq-GDbQ-deEt-9lfx-y38H-KmxD-AEeYV1']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452782 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-iHaIPb-Bb2H-eLK2-Iqn5-XQjN-E1m1-eIntoS', 'scsi-0QEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4', 'scsi-SQEMU_QEMU_HARDDISK_5df04f9b-dd43-4d22-91db-5ca8ef5423a4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5df04f9b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a1704272--fd93--5be5--acd9--a48498ed5939-osd--block--a1704272--fd93--5be5--acd9--a48498ed5939']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '288be04e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1', 'scsi-SQEMU_QEMU_HARDDISK_288be04e-f5c6-44d1-9ba7-92e7bdbdbceb-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452964 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:52:44.452993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf', 'dm-uuid-CRYPT-LUKS2-4c596dc9de7b46b7a8b5c464454d08c4-xxyYHD-fpVD-Mw8d-3rRN-PrDR-nSHB-X3sjuf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-01-30 06:52:44.453016 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:52:44.453029 | orchestrator |
2026-01-30 06:52:44.453041 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-30 06:52:44.453053 | orchestrator | Friday 30 January 2026 06:52:40 +0000 (0:00:01.486) 1:04:33.975 ********
2026-01-30 06:52:44.453065 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:52:44.453083 | orchestrator |
2026-01-30 06:52:44.453103 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-30 06:52:44.453123 | orchestrator | Friday 30 January 2026 06:52:41 +0000 (0:00:01.105) 1:04:35.462 ********
2026-01-30 06:52:44.453144 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:52:44.453163 | orchestrator |
2026-01-30 06:52:44.453179 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:52:44.453193 | orchestrator | Friday 30 January 2026 06:52:42 +0000 (0:00:01.488) 1:04:36.568 ********
2026-01-30 06:52:44.453211 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:52:44.453231 | orchestrator |
2026-01-30 06:52:44.453248 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:52:44.453272 | orchestrator | Friday 30 January 2026 06:52:44 +0000 (0:00:01.488) 1:04:38.056 ********
2026-01-30 06:53:26.326320 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.326539 | orchestrator |
2026-01-30 06:53:26.326570 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-30 06:53:26.326593 | orchestrator | Friday 30 January 2026 06:52:45 +0000 (0:00:01.114) 1:04:39.171 ********
2026-01-30 06:53:26.326613 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.326633 | orchestrator |
2026-01-30 06:53:26.326653 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-30 06:53:26.326674 | orchestrator | Friday 30 January 2026 06:52:46 +0000 (0:00:01.229) 1:04:40.400 ********
2026-01-30 06:53:26.326696 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.326721 | orchestrator |
2026-01-30 06:53:26.326744 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-30 06:53:26.326767 | orchestrator | Friday 30 January 2026 06:52:47 +0000 (0:00:01.130) 1:04:41.531 ********
2026-01-30 06:53:26.326792 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-30 06:53:26.326841 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-30 06:53:26.326860 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-30 06:53:26.326882 | orchestrator |
2026-01-30 06:53:26.326903 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-30 06:53:26.326925 | orchestrator | Friday 30 January 2026 06:52:49 +0000 (0:00:01.972) 1:04:43.504 ********
2026-01-30 06:53:26.326947 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-30 06:53:26.326970 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-30 06:53:26.326992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-30 06:53:26.327013 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.327031 | orchestrator |
2026-01-30 06:53:26.327051 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-30 06:53:26.327072 | orchestrator | Friday 30 January 2026 06:52:51 +0000 (0:00:01.152) 1:04:44.656 ********
2026-01-30 06:53:26.327114 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-01-30 06:53:26.327228 | orchestrator |
2026-01-30 06:53:26.327253 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-30 06:53:26.327273 | orchestrator | Friday 30 January 2026 06:52:52 +0000 (0:00:01.126) 1:04:45.782 ********
2026-01-30 06:53:26.327292 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.327342 | orchestrator |
2026-01-30 06:53:26.327362 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-30 06:53:26.327380 | orchestrator | Friday 30 January 2026 06:52:53 +0000 (0:00:01.179) 1:04:46.962 ********
2026-01-30 06:53:26.327397 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.327415 | orchestrator |
2026-01-30 06:53:26.327433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-30 06:53:26.327449 | orchestrator | Friday 30 January 2026 06:52:54 +0000 (0:00:01.155) 1:04:48.117 ********
2026-01-30 06:53:26.327465 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.327482 | orchestrator |
2026-01-30 06:53:26.327498 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-30 06:53:26.327515 | orchestrator | Friday 30 January 2026 06:52:55 +0000 (0:00:01.218) 1:04:49.336 ********
2026-01-30 06:53:26.327533 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:53:26.327550 | orchestrator |
2026-01-30 06:53:26.327567 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-30 06:53:26.327583 | orchestrator | Friday 30 January 2026 06:52:56 +0000 (0:00:01.234) 1:04:50.571 ********
2026-01-30 06:53:26.327600 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-30 06:53:26.327617 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:53:26.327633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-30 06:53:26.327649 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.327666 | orchestrator |
2026-01-30 06:53:26.327682 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-30 06:53:26.327700 | orchestrator | Friday 30 January 2026 06:52:58 +0000 (0:00:01.429) 1:04:52.001 ********
2026-01-30 06:53:26.327716 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-30 06:53:26.327732 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:53:26.327748 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-30 06:53:26.327766 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.327782 | orchestrator |
2026-01-30 06:53:26.327826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-30 06:53:26.327846 | orchestrator | Friday 30 January 2026 06:52:59 +0000 (0:00:01.428) 1:04:53.429 ********
2026-01-30 06:53:26.327865 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-01-30 06:53:26.327884 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:53:26.327901 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-01-30 06:53:26.327918 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.327936 | orchestrator |
2026-01-30 06:53:26.327956 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-30 06:53:26.327976 | orchestrator | Friday 30 January 2026 06:53:01 +0000 (0:00:01.435) 1:04:54.865 ********
2026-01-30 06:53:26.327995 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:53:26.328014 | orchestrator |
2026-01-30 06:53:26.328032 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-30 06:53:26.328049 | orchestrator | Friday 30 January 2026 06:53:02 +0000 (0:00:01.125) 1:04:55.991 ********
2026-01-30 06:53:26.328064 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-01-30 06:53:26.328080 | orchestrator |
2026-01-30 06:53:26.328098 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-01-30 06:53:26.328116 | orchestrator | Friday 30 January 2026 06:53:03 +0000 (0:00:01.331) 1:04:57.322 ********
2026-01-30 06:53:26.328165 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:53:26.328186 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:53:26.328205 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:53:26.328223 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 06:53:26.328260 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:53:26.328281 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:53:26.328298 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:53:26.328313 | orchestrator |
2026-01-30 06:53:26.328324 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-01-30 06:53:26.328335 | orchestrator | Friday 30 January 2026 06:53:05 +0000 (0:00:02.163) 1:04:59.486 ********
2026-01-30 06:53:26.328346 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-30 06:53:26.328356 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-30 06:53:26.328367 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-30 06:53:26.328377 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-01-30 06:53:26.328387 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-01-30 06:53:26.328396 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-01-30 06:53:26.328406 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-01-30 06:53:26.328415 | orchestrator |
2026-01-30 06:53:26.328425 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-01-30 06:53:26.328442 | orchestrator | Friday 30 January 2026 06:53:08 +0000 (0:00:02.251) 1:05:01.738 ********
2026-01-30 06:53:26.328452 | orchestrator | changed: [testbed-node-4]
2026-01-30 06:53:26.328462 | orchestrator |
2026-01-30 06:53:26.328472 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-01-30 06:53:26.328481 | orchestrator | Friday 30 January 2026 06:53:10 +0000 (0:00:02.052) 1:05:03.790 ********
2026-01-30 06:53:26.328491 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 06:53:26.328501 | orchestrator |
2026-01-30 06:53:26.328511 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-01-30 06:53:26.328520 | orchestrator | Friday 30 January 2026 06:53:12 +0000 (0:00:02.601) 1:05:06.392 ********
2026-01-30 06:53:26.328530 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-30 06:53:26.328539 | orchestrator |
2026-01-30 06:53:26.328549 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-30 06:53:26.328558 | orchestrator | Friday 30 January 2026 06:53:14 +0000 (0:00:02.009) 1:05:08.401 ********
2026-01-30 06:53:26.328567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-01-30 06:53:26.328577 | orchestrator |
2026-01-30 06:53:26.328587 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-30 06:53:26.328596 | orchestrator | Friday 30 January 2026 06:53:16 +0000 (0:00:01.313) 1:05:09.715 ********
2026-01-30 06:53:26.328605 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-01-30 06:53:26.328615 | orchestrator |
2026-01-30 06:53:26.328625 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-30 06:53:26.328634 | orchestrator | Friday 30 January 2026 06:53:17 +0000 (0:00:01.119) 1:05:10.834 ********
2026-01-30 06:53:26.328643 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.328653 | orchestrator |
2026-01-30 06:53:26.328662 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-30 06:53:26.328671 | orchestrator | Friday 30 January 2026 06:53:18 +0000 (0:00:01.133) 1:05:11.967 ********
2026-01-30 06:53:26.328681 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:53:26.328690 | orchestrator |
2026-01-30 06:53:26.328700 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-30 06:53:26.328709 | orchestrator | Friday 30 January 2026 06:53:19 +0000 (0:00:01.504) 1:05:13.472 ********
2026-01-30 06:53:26.328727 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:53:26.328738 | orchestrator |
2026-01-30 06:53:26.328754 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-30 06:53:26.328769 | orchestrator | Friday 30 January 2026 06:53:21 +0000 (0:00:01.603) 1:05:15.075 ********
2026-01-30 06:53:26.328783 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:53:26.328840 | orchestrator |
2026-01-30 06:53:26.328857 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-30 06:53:26.328872 | orchestrator | Friday 30 January 2026 06:53:22 +0000 (0:00:01.525) 1:05:16.601 ********
2026-01-30 06:53:26.328887 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.328903 | orchestrator |
2026-01-30 06:53:26.328918 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-30 06:53:26.328933 | orchestrator | Friday 30 January 2026 06:53:24 +0000 (0:00:01.095) 1:05:17.697 ********
2026-01-30 06:53:26.328949 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.328964 | orchestrator |
2026-01-30 06:53:26.328980 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-30 06:53:26.328996 | orchestrator | Friday 30 January 2026 06:53:25 +0000 (0:00:01.108) 1:05:18.805 ********
2026-01-30 06:53:26.329012 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:53:26.329029 | orchestrator |
2026-01-30 06:53:26.329046 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-30 06:53:26.329073 | orchestrator | Friday 30 January 2026 06:53:26 +0000 (0:00:01.117) 1:05:19.923 ********
2026-01-30 06:54:06.568842 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:54:06.568993 | orchestrator |
2026-01-30 06:54:06.569011 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-30 06:54:06.569025 | orchestrator | Friday 30 January 2026 06:53:27 +0000 (0:00:01.566) 1:05:21.489 ********
2026-01-30 06:54:06.569036 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:54:06.569048 | orchestrator |
2026-01-30 06:54:06.569059 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-30 06:54:06.569070 | orchestrator | Friday 30 January 2026 06:53:29 +0000 (0:00:01.632) 1:05:23.121 ********
2026-01-30 06:54:06.569082 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:54:06.569094 | orchestrator |
2026-01-30 06:54:06.569105 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-30 06:54:06.569116 | orchestrator | Friday 30 January 2026 06:53:30 +0000 (0:00:00.805) 1:05:23.927 ********
2026-01-30 06:54:06.569127 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:54:06.569138 | orchestrator |
2026-01-30 06:54:06.569149 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-30 06:54:06.569160 | orchestrator | Friday 30 January 2026 06:53:31 +0000 (0:00:00.779) 1:05:24.706 ********
2026-01-30 06:54:06.569171 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:54:06.569181 | orchestrator |
2026-01-30 06:54:06.569192 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-30 06:54:06.569203 | orchestrator | Friday 30 January 2026 06:53:31 +0000 (0:00:00.820) 1:05:25.527 ********
2026-01-30 06:54:06.569214 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:54:06.569225 | orchestrator |
2026-01-30 06:54:06.569236 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-30 06:54:06.569247 | orchestrator | Friday 30 January 2026 06:53:32 +0000 (0:00:00.821) 1:05:26.348 ********
2026-01-30 06:54:06.569257 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:54:06.569268 | orchestrator |
2026-01-30 06:54:06.569279 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-30 06:54:06.569307 | orchestrator | Friday 30 January 2026 06:53:33 +0000 (0:00:00.821) 1:05:27.170 ********
2026-01-30 06:54:06.569318 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:54:06.569329 | orchestrator |
2026-01-30 06:54:06.569340 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-30 06:54:06.569351 | orchestrator | Friday 30 January 2026 06:53:34 +0000 (0:00:00.769) 1:05:27.939 ********
2026-01-30 06:54:06.569386 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:54:06.569397 | orchestrator |
2026-01-30 06:54:06.569408 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-30 06:54:06.569419 | orchestrator | Friday 30 January 2026 06:53:35 +0000 (0:00:00.755) 1:05:28.695 ********
2026-01-30 06:54:06.569430 | orchestrator | skipping: [testbed-node-4]
2026-01-30 06:54:06.569441 | orchestrator |
2026-01-30 06:54:06.569452 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-30 06:54:06.569462 | orchestrator | Friday 30 January 2026 06:53:35 +0000 (0:00:00.786) 1:05:29.482 ********
2026-01-30 06:54:06.569473 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:54:06.569484 | orchestrator |
2026-01-30 06:54:06.569494 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-30 06:54:06.569505 | orchestrator | Friday 30 January 2026 06:53:36 +0000 (0:00:00.801) 1:05:30.284 ********
2026-01-30 06:54:06.569515 | orchestrator | ok: [testbed-node-4]
2026-01-30 06:54:06.569526 | orchestrator |
2026-01-30 06:54:06.569537
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:54:06.569548 | orchestrator | Friday 30 January 2026 06:53:37 +0000 (0:00:00.799) 1:05:31.083 ******** 2026-01-30 06:54:06.569558 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569569 | orchestrator | 2026-01-30 06:54:06.569579 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 06:54:06.569590 | orchestrator | Friday 30 January 2026 06:53:38 +0000 (0:00:00.786) 1:05:31.869 ******** 2026-01-30 06:54:06.569601 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569612 | orchestrator | 2026-01-30 06:54:06.569622 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:54:06.569633 | orchestrator | Friday 30 January 2026 06:53:39 +0000 (0:00:00.770) 1:05:32.640 ******** 2026-01-30 06:54:06.569644 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569654 | orchestrator | 2026-01-30 06:54:06.569665 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:54:06.569676 | orchestrator | Friday 30 January 2026 06:53:39 +0000 (0:00:00.861) 1:05:33.502 ******** 2026-01-30 06:54:06.569686 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569697 | orchestrator | 2026-01-30 06:54:06.569708 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:54:06.569719 | orchestrator | Friday 30 January 2026 06:53:40 +0000 (0:00:00.769) 1:05:34.271 ******** 2026-01-30 06:54:06.569729 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569740 | orchestrator | 2026-01-30 06:54:06.569778 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:54:06.569790 | orchestrator | Friday 30 January 2026 06:53:41 +0000 (0:00:00.743) 1:05:35.015 ******** 
2026-01-30 06:54:06.569801 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569812 | orchestrator | 2026-01-30 06:54:06.569824 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:54:06.569834 | orchestrator | Friday 30 January 2026 06:53:42 +0000 (0:00:00.793) 1:05:35.809 ******** 2026-01-30 06:54:06.569845 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569856 | orchestrator | 2026-01-30 06:54:06.569867 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:54:06.569878 | orchestrator | Friday 30 January 2026 06:53:42 +0000 (0:00:00.777) 1:05:36.586 ******** 2026-01-30 06:54:06.569889 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569900 | orchestrator | 2026-01-30 06:54:06.569910 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:54:06.569921 | orchestrator | Friday 30 January 2026 06:53:43 +0000 (0:00:00.820) 1:05:37.407 ******** 2026-01-30 06:54:06.569932 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.569943 | orchestrator | 2026-01-30 06:54:06.569973 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:54:06.569984 | orchestrator | Friday 30 January 2026 06:53:44 +0000 (0:00:00.789) 1:05:38.196 ******** 2026-01-30 06:54:06.570003 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570078 | orchestrator | 2026-01-30 06:54:06.570099 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:54:06.570111 | orchestrator | Friday 30 January 2026 06:53:45 +0000 (0:00:00.741) 1:05:38.938 ******** 2026-01-30 06:54:06.570122 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570133 | orchestrator | 2026-01-30 06:54:06.570143 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-01-30 06:54:06.570154 | orchestrator | Friday 30 January 2026 06:53:46 +0000 (0:00:00.765) 1:05:39.703 ******** 2026-01-30 06:54:06.570165 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570176 | orchestrator | 2026-01-30 06:54:06.570187 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:54:06.570197 | orchestrator | Friday 30 January 2026 06:53:46 +0000 (0:00:00.749) 1:05:40.453 ******** 2026-01-30 06:54:06.570208 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:54:06.570219 | orchestrator | 2026-01-30 06:54:06.570229 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:54:06.570240 | orchestrator | Friday 30 January 2026 06:53:48 +0000 (0:00:01.600) 1:05:42.053 ******** 2026-01-30 06:54:06.570251 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:54:06.570262 | orchestrator | 2026-01-30 06:54:06.570273 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:54:06.570283 | orchestrator | Friday 30 January 2026 06:53:50 +0000 (0:00:02.078) 1:05:44.131 ******** 2026-01-30 06:54:06.570294 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-01-30 06:54:06.570306 | orchestrator | 2026-01-30 06:54:06.570317 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:54:06.570334 | orchestrator | Friday 30 January 2026 06:53:51 +0000 (0:00:01.300) 1:05:45.432 ******** 2026-01-30 06:54:06.570345 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570356 | orchestrator | 2026-01-30 06:54:06.570366 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:54:06.570377 | orchestrator | Friday 30 January 2026 06:53:52 +0000 (0:00:01.112) 1:05:46.544 ******** 
2026-01-30 06:54:06.570388 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570399 | orchestrator | 2026-01-30 06:54:06.570409 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-30 06:54:06.570420 | orchestrator | Friday 30 January 2026 06:53:54 +0000 (0:00:01.125) 1:05:47.670 ******** 2026-01-30 06:54:06.570431 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:54:06.570442 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:54:06.570453 | orchestrator | 2026-01-30 06:54:06.570463 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:54:06.570474 | orchestrator | Friday 30 January 2026 06:53:55 +0000 (0:00:01.854) 1:05:49.525 ******** 2026-01-30 06:54:06.570485 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:54:06.570496 | orchestrator | 2026-01-30 06:54:06.570506 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:54:06.570517 | orchestrator | Friday 30 January 2026 06:53:57 +0000 (0:00:01.476) 1:05:51.002 ******** 2026-01-30 06:54:06.570528 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570538 | orchestrator | 2026-01-30 06:54:06.570549 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:54:06.570559 | orchestrator | Friday 30 January 2026 06:53:58 +0000 (0:00:01.175) 1:05:52.177 ******** 2026-01-30 06:54:06.570570 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570581 | orchestrator | 2026-01-30 06:54:06.570592 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:54:06.570602 | orchestrator | Friday 30 January 2026 06:53:59 +0000 (0:00:00.832) 1:05:53.009 ******** 2026-01-30 06:54:06.570621 | orchestrator | 
skipping: [testbed-node-4] 2026-01-30 06:54:06.570632 | orchestrator | 2026-01-30 06:54:06.570643 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:54:06.570653 | orchestrator | Friday 30 January 2026 06:54:00 +0000 (0:00:00.746) 1:05:53.756 ******** 2026-01-30 06:54:06.570664 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-01-30 06:54:06.570675 | orchestrator | 2026-01-30 06:54:06.570685 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:54:06.570696 | orchestrator | Friday 30 January 2026 06:54:01 +0000 (0:00:01.088) 1:05:54.845 ******** 2026-01-30 06:54:06.570707 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:54:06.570717 | orchestrator | 2026-01-30 06:54:06.570728 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:54:06.570739 | orchestrator | Friday 30 January 2026 06:54:02 +0000 (0:00:01.753) 1:05:56.599 ******** 2026-01-30 06:54:06.570773 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:54:06.570785 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:54:06.570796 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:54:06.570807 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570818 | orchestrator | 2026-01-30 06:54:06.570829 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:54:06.570840 | orchestrator | Friday 30 January 2026 06:54:04 +0000 (0:00:01.166) 1:05:57.766 ******** 2026-01-30 06:54:06.570851 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570865 | orchestrator | 2026-01-30 06:54:06.570882 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-01-30 06:54:06.570901 | orchestrator | Friday 30 January 2026 06:54:05 +0000 (0:00:01.122) 1:05:58.888 ******** 2026-01-30 06:54:06.570918 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:06.570937 | orchestrator | 2026-01-30 06:54:06.570959 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:54:49.494908 | orchestrator | Friday 30 January 2026 06:54:06 +0000 (0:00:01.281) 1:06:00.170 ******** 2026-01-30 06:54:49.495053 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495080 | orchestrator | 2026-01-30 06:54:49.495100 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:54:49.495119 | orchestrator | Friday 30 January 2026 06:54:07 +0000 (0:00:01.140) 1:06:01.311 ******** 2026-01-30 06:54:49.495138 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495156 | orchestrator | 2026-01-30 06:54:49.495176 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:54:49.495194 | orchestrator | Friday 30 January 2026 06:54:08 +0000 (0:00:01.189) 1:06:02.500 ******** 2026-01-30 06:54:49.495213 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495233 | orchestrator | 2026-01-30 06:54:49.495250 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:54:49.495268 | orchestrator | Friday 30 January 2026 06:54:09 +0000 (0:00:00.788) 1:06:03.289 ******** 2026-01-30 06:54:49.495287 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:54:49.495306 | orchestrator | 2026-01-30 06:54:49.495324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:54:49.495343 | orchestrator | Friday 30 January 2026 06:54:11 +0000 (0:00:02.215) 1:06:05.505 ******** 2026-01-30 06:54:49.495363 | orchestrator | ok: 
[testbed-node-4] 2026-01-30 06:54:49.495382 | orchestrator | 2026-01-30 06:54:49.495401 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:54:49.495420 | orchestrator | Friday 30 January 2026 06:54:12 +0000 (0:00:00.764) 1:06:06.270 ******** 2026-01-30 06:54:49.495439 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-01-30 06:54:49.495457 | orchestrator | 2026-01-30 06:54:49.495477 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:54:49.495547 | orchestrator | Friday 30 January 2026 06:54:13 +0000 (0:00:01.098) 1:06:07.368 ******** 2026-01-30 06:54:49.495568 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495589 | orchestrator | 2026-01-30 06:54:49.495609 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:54:49.495630 | orchestrator | Friday 30 January 2026 06:54:14 +0000 (0:00:01.149) 1:06:08.518 ******** 2026-01-30 06:54:49.495648 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495667 | orchestrator | 2026-01-30 06:54:49.495686 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:54:49.495729 | orchestrator | Friday 30 January 2026 06:54:16 +0000 (0:00:01.128) 1:06:09.646 ******** 2026-01-30 06:54:49.495748 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495767 | orchestrator | 2026-01-30 06:54:49.495784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:54:49.495803 | orchestrator | Friday 30 January 2026 06:54:17 +0000 (0:00:01.146) 1:06:10.793 ******** 2026-01-30 06:54:49.495822 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495839 | orchestrator | 2026-01-30 06:54:49.495858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-01-30 06:54:49.495876 | orchestrator | Friday 30 January 2026 06:54:18 +0000 (0:00:01.134) 1:06:11.928 ******** 2026-01-30 06:54:49.495895 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495913 | orchestrator | 2026-01-30 06:54:49.495932 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:54:49.495951 | orchestrator | Friday 30 January 2026 06:54:19 +0000 (0:00:01.144) 1:06:13.072 ******** 2026-01-30 06:54:49.495969 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.495987 | orchestrator | 2026-01-30 06:54:49.496006 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:54:49.496025 | orchestrator | Friday 30 January 2026 06:54:20 +0000 (0:00:01.184) 1:06:14.256 ******** 2026-01-30 06:54:49.496043 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.496061 | orchestrator | 2026-01-30 06:54:49.496080 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:54:49.496098 | orchestrator | Friday 30 January 2026 06:54:21 +0000 (0:00:01.140) 1:06:15.397 ******** 2026-01-30 06:54:49.496116 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.496135 | orchestrator | 2026-01-30 06:54:49.496153 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:54:49.496171 | orchestrator | Friday 30 January 2026 06:54:22 +0000 (0:00:01.128) 1:06:16.526 ******** 2026-01-30 06:54:49.496189 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:54:49.496207 | orchestrator | 2026-01-30 06:54:49.496225 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:54:49.496244 | orchestrator | Friday 30 January 2026 06:54:23 +0000 (0:00:00.808) 1:06:17.334 ******** 2026-01-30 06:54:49.496262 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-01-30 06:54:49.496281 | orchestrator | 2026-01-30 06:54:49.496299 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:54:49.496318 | orchestrator | Friday 30 January 2026 06:54:24 +0000 (0:00:01.115) 1:06:18.450 ******** 2026-01-30 06:54:49.496336 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-01-30 06:54:49.496356 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-01-30 06:54:49.496374 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-01-30 06:54:49.496394 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-01-30 06:54:49.496414 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-01-30 06:54:49.496433 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-01-30 06:54:49.496452 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-01-30 06:54:49.496472 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:54:49.496504 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:54:49.496524 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:54:49.496542 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:54:49.496587 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:54:49.496608 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:54:49.496627 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:54:49.496645 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-01-30 06:54:49.496663 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-01-30 06:54:49.496680 | orchestrator | 2026-01-30 06:54:49.496728 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:54:49.496748 | orchestrator | Friday 30 January 2026 06:54:31 +0000 (0:00:06.697) 1:06:25.147 ******** 2026-01-30 06:54:49.496767 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-01-30 06:54:49.496786 | orchestrator | 2026-01-30 06:54:49.496807 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-30 06:54:49.496825 | orchestrator | Friday 30 January 2026 06:54:32 +0000 (0:00:01.116) 1:06:26.263 ******** 2026-01-30 06:54:49.496844 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:54:49.496863 | orchestrator | 2026-01-30 06:54:49.496878 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-30 06:54:49.496895 | orchestrator | Friday 30 January 2026 06:54:34 +0000 (0:00:01.494) 1:06:27.757 ******** 2026-01-30 06:54:49.496913 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:54:49.496933 | orchestrator | 2026-01-30 06:54:49.496952 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:54:49.496982 | orchestrator | Friday 30 January 2026 06:54:35 +0000 (0:00:01.726) 1:06:29.484 ******** 2026-01-30 06:54:49.497001 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497019 | orchestrator | 2026-01-30 06:54:49.497038 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:54:49.497055 | orchestrator | Friday 30 January 2026 06:54:36 +0000 (0:00:00.756) 1:06:30.241 ******** 2026-01-30 06:54:49.497074 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497092 | 
orchestrator | 2026-01-30 06:54:49.497110 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:54:49.497129 | orchestrator | Friday 30 January 2026 06:54:37 +0000 (0:00:00.831) 1:06:31.072 ******** 2026-01-30 06:54:49.497144 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497163 | orchestrator | 2026-01-30 06:54:49.497186 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-30 06:54:49.497211 | orchestrator | Friday 30 January 2026 06:54:38 +0000 (0:00:00.768) 1:06:31.841 ******** 2026-01-30 06:54:49.497229 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497246 | orchestrator | 2026-01-30 06:54:49.497264 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:54:49.497280 | orchestrator | Friday 30 January 2026 06:54:39 +0000 (0:00:00.778) 1:06:32.619 ******** 2026-01-30 06:54:49.497297 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497314 | orchestrator | 2026-01-30 06:54:49.497330 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:54:49.497347 | orchestrator | Friday 30 January 2026 06:54:39 +0000 (0:00:00.769) 1:06:33.389 ******** 2026-01-30 06:54:49.497363 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497378 | orchestrator | 2026-01-30 06:54:49.497392 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:54:49.497407 | orchestrator | Friday 30 January 2026 06:54:40 +0000 (0:00:00.803) 1:06:34.192 ******** 2026-01-30 06:54:49.497438 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497454 | orchestrator | 2026-01-30 06:54:49.497469 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-01-30 06:54:49.497484 | orchestrator | Friday 30 January 2026 06:54:41 +0000 (0:00:00.772) 1:06:34.966 ******** 2026-01-30 06:54:49.497499 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497514 | orchestrator | 2026-01-30 06:54:49.497530 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:54:49.497547 | orchestrator | Friday 30 January 2026 06:54:42 +0000 (0:00:00.770) 1:06:35.736 ******** 2026-01-30 06:54:49.497561 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497578 | orchestrator | 2026-01-30 06:54:49.497594 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:54:49.497609 | orchestrator | Friday 30 January 2026 06:54:42 +0000 (0:00:00.777) 1:06:36.514 ******** 2026-01-30 06:54:49.497626 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497642 | orchestrator | 2026-01-30 06:54:49.497658 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:54:49.497669 | orchestrator | Friday 30 January 2026 06:54:43 +0000 (0:00:00.777) 1:06:37.291 ******** 2026-01-30 06:54:49.497679 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:54:49.497688 | orchestrator | 2026-01-30 06:54:49.497729 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:54:49.497740 | orchestrator | Friday 30 January 2026 06:54:44 +0000 (0:00:00.814) 1:06:38.106 ******** 2026-01-30 06:54:49.497750 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-01-30 06:54:49.497759 | orchestrator | 2026-01-30 06:54:49.497768 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:54:49.497778 | orchestrator | Friday 30 January 2026 06:54:48 +0000 (0:00:04.183) 1:06:42.290 ******** 2026-01-30 06:54:49.497788 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:54:49.497798 | orchestrator | 2026-01-30 06:54:49.497823 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:55:30.743062 | orchestrator | Friday 30 January 2026 06:54:49 +0000 (0:00:00.804) 1:06:43.095 ******** 2026-01-30 06:55:30.743179 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-01-30 06:55:30.743198 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-01-30 06:55:30.743211 | orchestrator | 2026-01-30 06:55:30.743223 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:55:30.743234 | orchestrator | Friday 30 January 2026 06:54:54 +0000 (0:00:04.968) 1:06:48.064 ******** 2026-01-30 06:55:30.743245 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.743257 | orchestrator | 2026-01-30 06:55:30.743268 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:55:30.743279 | orchestrator | Friday 30 January 2026 06:54:55 +0000 (0:00:00.840) 1:06:48.905 ******** 2026-01-30 06:55:30.743290 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.743300 | orchestrator | 2026-01-30 06:55:30.743328 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:55:30.743342 | orchestrator | Friday 30 January 2026 06:54:56 +0000 (0:00:00.775) 1:06:49.680 ******** 2026-01-30 06:55:30.743374 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.743386 | orchestrator | 2026-01-30 06:55:30.743397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:55:30.743407 | orchestrator | Friday 30 January 2026 06:54:56 +0000 (0:00:00.795) 1:06:50.476 ******** 2026-01-30 06:55:30.743418 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.743429 | orchestrator | 2026-01-30 06:55:30.743439 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:55:30.743450 | orchestrator | Friday 30 January 2026 06:54:57 +0000 (0:00:00.818) 1:06:51.294 ******** 2026-01-30 06:55:30.743461 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.743472 | orchestrator | 2026-01-30 06:55:30.743483 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:55:30.743493 | orchestrator | Friday 30 January 2026 06:54:58 +0000 (0:00:00.810) 1:06:52.105 ******** 2026-01-30 06:55:30.743504 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:55:30.743516 | orchestrator | 2026-01-30 06:55:30.743527 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:55:30.743537 | orchestrator | Friday 30 January 2026 06:54:59 +0000 (0:00:00.920) 1:06:53.026 ******** 2026-01-30 06:55:30.743548 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-30 06:55:30.743559 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-30 06:55:30.743569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-30 06:55:30.743580 | orchestrator | skipping: 
[testbed-node-4] 2026-01-30 06:55:30.743591 | orchestrator | 2026-01-30 06:55:30.743601 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:55:30.743612 | orchestrator | Friday 30 January 2026 06:55:00 +0000 (0:00:01.072) 1:06:54.098 ******** 2026-01-30 06:55:30.743622 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-30 06:55:30.743633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-30 06:55:30.743643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-30 06:55:30.743680 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.743692 | orchestrator | 2026-01-30 06:55:30.743703 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:55:30.743714 | orchestrator | Friday 30 January 2026 06:55:01 +0000 (0:00:01.085) 1:06:55.183 ******** 2026-01-30 06:55:30.743724 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-30 06:55:30.743747 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-30 06:55:30.743759 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-30 06:55:30.743779 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.743790 | orchestrator | 2026-01-30 06:55:30.743801 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:55:30.743812 | orchestrator | Friday 30 January 2026 06:55:02 +0000 (0:00:01.030) 1:06:56.214 ******** 2026-01-30 06:55:30.743822 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:55:30.743833 | orchestrator | 2026-01-30 06:55:30.743843 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:55:30.743854 | orchestrator | Friday 30 January 2026 06:55:03 +0000 (0:00:00.797) 1:06:57.012 ******** 2026-01-30 06:55:30.743865 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-01-30 06:55:30.743875 | orchestrator | 2026-01-30 06:55:30.743886 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 06:55:30.743897 | orchestrator | Friday 30 January 2026 06:55:04 +0000 (0:00:00.993) 1:06:58.006 ******** 2026-01-30 06:55:30.743907 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:55:30.743918 | orchestrator | 2026-01-30 06:55:30.743929 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-30 06:55:30.743939 | orchestrator | Friday 30 January 2026 06:55:05 +0000 (0:00:01.415) 1:06:59.422 ******** 2026-01-30 06:55:30.743950 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-01-30 06:55:30.743969 | orchestrator | 2026-01-30 06:55:30.743997 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-30 06:55:30.744008 | orchestrator | Friday 30 January 2026 06:55:07 +0000 (0:00:01.288) 1:07:00.710 ******** 2026-01-30 06:55:30.744019 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 06:55:30.744030 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-30 06:55:30.744041 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 06:55:30.744051 | orchestrator | 2026-01-30 06:55:30.744062 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-30 06:55:30.744072 | orchestrator | Friday 30 January 2026 06:55:10 +0000 (0:00:03.304) 1:07:04.015 ******** 2026-01-30 06:55:30.744083 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-01-30 06:55:30.744094 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-30 06:55:30.744105 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:55:30.744115 | orchestrator | 2026-01-30 06:55:30.744126 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-01-30 06:55:30.744137 | orchestrator | Friday 30 January 2026 06:55:12 +0000 (0:00:01.939) 1:07:05.955 ******** 2026-01-30 06:55:30.744147 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.744158 | orchestrator | 2026-01-30 06:55:30.744169 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-30 06:55:30.744179 | orchestrator | Friday 30 January 2026 06:55:13 +0000 (0:00:00.812) 1:07:06.767 ******** 2026-01-30 06:55:30.744190 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-01-30 06:55:30.744201 | orchestrator | 2026-01-30 06:55:30.744212 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-30 06:55:30.744222 | orchestrator | Friday 30 January 2026 06:55:14 +0000 (0:00:01.089) 1:07:07.857 ******** 2026-01-30 06:55:30.744239 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:55:30.744251 | orchestrator | 2026-01-30 06:55:30.744261 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-30 06:55:30.744272 | orchestrator | Friday 30 January 2026 06:55:15 +0000 (0:00:01.616) 1:07:09.473 ******** 2026-01-30 06:55:30.744283 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 06:55:30.744293 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-30 06:55:30.744304 | orchestrator | 2026-01-30 06:55:30.744315 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-30 06:55:30.744325 | orchestrator | Friday 30 January 2026 06:55:21 +0000 (0:00:05.471) 1:07:14.945 ******** 
2026-01-30 06:55:30.744336 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 06:55:30.744347 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 06:55:30.744357 | orchestrator | 2026-01-30 06:55:30.744368 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-30 06:55:30.744379 | orchestrator | Friday 30 January 2026 06:55:24 +0000 (0:00:03.130) 1:07:18.075 ******** 2026-01-30 06:55:30.744389 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-01-30 06:55:30.744400 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:55:30.744411 | orchestrator | 2026-01-30 06:55:30.744421 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-30 06:55:30.744432 | orchestrator | Friday 30 January 2026 06:55:26 +0000 (0:00:01.666) 1:07:19.742 ******** 2026-01-30 06:55:30.744442 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-01-30 06:55:30.744453 | orchestrator | 2026-01-30 06:55:30.744464 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-30 06:55:30.744481 | orchestrator | Friday 30 January 2026 06:55:27 +0000 (0:00:01.279) 1:07:21.021 ******** 2026-01-30 06:55:30.744492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744547 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:55:30.744557 | orchestrator | 2026-01-30 06:55:30.744568 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-30 06:55:30.744579 | orchestrator | Friday 30 January 2026 06:55:28 +0000 (0:00:01.585) 1:07:22.606 ******** 2026-01-30 06:55:30.744589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:55:30.744627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:56:39.355263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 06:56:39.355459 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:56:39.355490 | orchestrator | 2026-01-30 06:56:39.355513 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-30 06:56:39.355535 | orchestrator | Friday 30 January 2026 06:55:30 +0000 (0:00:01.733) 1:07:24.340 ******** 2026-01-30 06:56:39.355556 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:56:39.355578 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:56:39.355629 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:56:39.355649 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:56:39.355671 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 06:56:39.355689 | orchestrator | 2026-01-30 06:56:39.355709 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-30 06:56:39.355728 | orchestrator | Friday 30 January 2026 06:56:04 +0000 (0:00:33.604) 1:07:57.944 ******** 2026-01-30 06:56:39.355747 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:56:39.355767 | orchestrator | 2026-01-30 06:56:39.355787 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-30 06:56:39.355806 | orchestrator | Friday 30 January 2026 06:56:05 +0000 (0:00:00.754) 1:07:58.699 ******** 2026-01-30 06:56:39.355826 | orchestrator | skipping: [testbed-node-4] 2026-01-30 06:56:39.355852 | orchestrator | 2026-01-30 06:56:39.355865 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-30 06:56:39.355911 | orchestrator | Friday 30 January 2026 06:56:05 +0000 (0:00:00.756) 1:07:59.455 ******** 2026-01-30 06:56:39.355924 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-01-30 06:56:39.355938 | orchestrator | 2026-01-30 06:56:39.355951 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-01-30 06:56:39.355963 | orchestrator | Friday 30 January 2026 06:56:06 +0000 (0:00:01.100) 1:08:00.555 ******** 2026-01-30 06:56:39.355975 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-01-30 06:56:39.355989 | orchestrator | 2026-01-30 06:56:39.356002 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-30 06:56:39.356014 | orchestrator | Friday 30 January 2026 06:56:08 +0000 (0:00:01.097) 1:08:01.652 ******** 2026-01-30 06:56:39.356027 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:56:39.356040 | orchestrator | 2026-01-30 06:56:39.356053 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-30 06:56:39.356066 | orchestrator | Friday 30 January 2026 06:56:10 +0000 (0:00:02.150) 1:08:03.803 ******** 2026-01-30 06:56:39.356078 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:56:39.356090 | orchestrator | 2026-01-30 06:56:39.356102 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-30 06:56:39.356114 | orchestrator | Friday 30 January 2026 06:56:12 +0000 (0:00:01.989) 1:08:05.793 ******** 2026-01-30 06:56:39.356127 | orchestrator | ok: [testbed-node-4] 2026-01-30 06:56:39.356140 | orchestrator | 2026-01-30 06:56:39.356151 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-30 06:56:39.356161 | orchestrator | Friday 30 January 2026 06:56:14 +0000 (0:00:02.292) 1:08:08.086 ******** 2026-01-30 06:56:39.356172 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-30 06:56:39.356183 | orchestrator | 2026-01-30 06:56:39.356193 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-01-30 06:56:39.356204 | 
orchestrator | 2026-01-30 06:56:39.356222 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-30 06:56:39.356241 | orchestrator | Friday 30 January 2026 06:56:17 +0000 (0:00:03.403) 1:08:11.490 ******** 2026-01-30 06:56:39.356261 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-01-30 06:56:39.356280 | orchestrator | 2026-01-30 06:56:39.356298 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-30 06:56:39.356317 | orchestrator | Friday 30 January 2026 06:56:18 +0000 (0:00:01.116) 1:08:12.606 ******** 2026-01-30 06:56:39.356337 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.356357 | orchestrator | 2026-01-30 06:56:39.356377 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-30 06:56:39.356397 | orchestrator | Friday 30 January 2026 06:56:20 +0000 (0:00:01.463) 1:08:14.070 ******** 2026-01-30 06:56:39.356417 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.356429 | orchestrator | 2026-01-30 06:56:39.356439 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 06:56:39.356451 | orchestrator | Friday 30 January 2026 06:56:21 +0000 (0:00:01.113) 1:08:15.183 ******** 2026-01-30 06:56:39.356461 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.356472 | orchestrator | 2026-01-30 06:56:39.356483 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 06:56:39.356493 | orchestrator | Friday 30 January 2026 06:56:23 +0000 (0:00:01.446) 1:08:16.630 ******** 2026-01-30 06:56:39.356504 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.356514 | orchestrator | 2026-01-30 06:56:39.356551 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-30 06:56:39.356563 | orchestrator | Friday 30 
January 2026 06:56:24 +0000 (0:00:01.103) 1:08:17.734 ******** 2026-01-30 06:56:39.356573 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.356617 | orchestrator | 2026-01-30 06:56:39.356652 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-30 06:56:39.356672 | orchestrator | Friday 30 January 2026 06:56:25 +0000 (0:00:01.137) 1:08:18.871 ******** 2026-01-30 06:56:39.356689 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.356707 | orchestrator | 2026-01-30 06:56:39.356727 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-30 06:56:39.356748 | orchestrator | Friday 30 January 2026 06:56:26 +0000 (0:00:01.156) 1:08:20.027 ******** 2026-01-30 06:56:39.356768 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:39.356789 | orchestrator | 2026-01-30 06:56:39.356810 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-30 06:56:39.356823 | orchestrator | Friday 30 January 2026 06:56:27 +0000 (0:00:01.118) 1:08:21.146 ******** 2026-01-30 06:56:39.356834 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.356845 | orchestrator | 2026-01-30 06:56:39.356855 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-30 06:56:39.356866 | orchestrator | Friday 30 January 2026 06:56:28 +0000 (0:00:01.198) 1:08:22.344 ******** 2026-01-30 06:56:39.356877 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:56:39.356887 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:56:39.356960 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:56:39.356972 | orchestrator | 2026-01-30 06:56:39.356996 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-01-30 06:56:39.357020 | orchestrator | Friday 30 January 2026 06:56:30 +0000 (0:00:02.155) 1:08:24.500 ******** 2026-01-30 06:56:39.357044 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:39.357062 | orchestrator | 2026-01-30 06:56:39.357080 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-30 06:56:39.357096 | orchestrator | Friday 30 January 2026 06:56:32 +0000 (0:00:01.278) 1:08:25.779 ******** 2026-01-30 06:56:39.357112 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:56:39.357129 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:56:39.357146 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:56:39.357163 | orchestrator | 2026-01-30 06:56:39.357180 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-30 06:56:39.357198 | orchestrator | Friday 30 January 2026 06:56:35 +0000 (0:00:02.957) 1:08:28.736 ******** 2026-01-30 06:56:39.357217 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-30 06:56:39.357234 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-30 06:56:39.357250 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-30 06:56:39.357268 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:39.357286 | orchestrator | 2026-01-30 06:56:39.357306 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-30 06:56:39.357324 | orchestrator | Friday 30 January 2026 06:56:36 +0000 (0:00:01.423) 1:08:30.160 ******** 2026-01-30 06:56:39.357346 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-30 06:56:39.357370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-30 06:56:39.357388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-30 06:56:39.357424 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:39.357442 | orchestrator | 2026-01-30 06:56:39.357461 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-30 06:56:39.357480 | orchestrator | Friday 30 January 2026 06:56:38 +0000 (0:00:01.638) 1:08:31.799 ******** 2026-01-30 06:56:39.357502 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:39.357545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:58.622706 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:58.622804 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:58.622813 | orchestrator | 2026-01-30 06:56:58.622819 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-30 06:56:58.622825 | orchestrator | Friday 30 January 2026 06:56:39 +0000 (0:00:01.155) 1:08:32.954 ******** 2026-01-30 06:56:58.622842 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2a9cfa0bd5a0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-30 06:56:32.713809', 'end': '2026-01-30 06:56:32.761362', 'delta': '0:00:00.047553', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a9cfa0bd5a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-30 06:56:58.622851 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '5f90d45395e7', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-30 06:56:33.343528', 'end': '2026-01-30 06:56:33.395822', 'delta': '0:00:00.052294', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5f90d45395e7'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-30 06:56:58.622856 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '001555f51e11', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-30 06:56:33.939630', 'end': '2026-01-30 06:56:33.992690', 'delta': '0:00:00.053060', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['001555f51e11'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-30 06:56:58.622875 | orchestrator | 2026-01-30 06:56:58.622880 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-30 06:56:58.622884 | orchestrator | Friday 30 January 2026 06:56:41 +0000 (0:00:01.686) 1:08:34.641 ******** 2026-01-30 06:56:58.622888 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:58.622894 | orchestrator | 2026-01-30 06:56:58.622898 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-30 06:56:58.622902 | orchestrator | Friday 30 January 2026 06:56:42 +0000 (0:00:01.291) 1:08:35.932 ******** 2026-01-30 06:56:58.622906 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:58.622910 | orchestrator | 2026-01-30 06:56:58.622915 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-01-30 06:56:58.622922 | orchestrator | Friday 30 January 2026 06:56:43 +0000 (0:00:01.271) 1:08:37.204 ******** 2026-01-30 06:56:58.622928 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:58.622934 | orchestrator | 2026-01-30 06:56:58.622940 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-30 06:56:58.622947 | orchestrator | Friday 30 January 2026 06:56:44 +0000 (0:00:01.130) 1:08:38.334 ******** 2026-01-30 06:56:58.622954 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-01-30 06:56:58.622961 | orchestrator | 2026-01-30 06:56:58.622967 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:56:58.622973 | orchestrator | Friday 30 January 2026 06:56:46 +0000 (0:00:01.999) 1:08:40.334 ******** 2026-01-30 06:56:58.622980 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:58.622986 | orchestrator | 2026-01-30 06:56:58.622992 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-30 06:56:58.622998 | orchestrator | Friday 30 January 2026 06:56:47 +0000 (0:00:01.126) 1:08:41.461 ******** 2026-01-30 06:56:58.623018 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:58.623024 | orchestrator | 2026-01-30 06:56:58.623031 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-30 06:56:58.623038 | orchestrator | Friday 30 January 2026 06:56:48 +0000 (0:00:01.151) 1:08:42.613 ******** 2026-01-30 06:56:58.623043 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:58.623049 | orchestrator | 2026-01-30 06:56:58.623055 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-30 06:56:58.623061 | orchestrator | Friday 30 January 2026 06:56:50 +0000 (0:00:01.211) 1:08:43.824 ******** 2026-01-30 06:56:58.623068 | orchestrator | 
skipping: [testbed-node-5] 2026-01-30 06:56:58.623074 | orchestrator | 2026-01-30 06:56:58.623080 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-30 06:56:58.623086 | orchestrator | Friday 30 January 2026 06:56:51 +0000 (0:00:01.203) 1:08:45.027 ******** 2026-01-30 06:56:58.623092 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:58.623098 | orchestrator | 2026-01-30 06:56:58.623104 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-30 06:56:58.623114 | orchestrator | Friday 30 January 2026 06:56:52 +0000 (0:00:01.123) 1:08:46.151 ******** 2026-01-30 06:56:58.623120 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:58.623126 | orchestrator | 2026-01-30 06:56:58.623132 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-01-30 06:56:58.623139 | orchestrator | Friday 30 January 2026 06:56:53 +0000 (0:00:01.150) 1:08:47.302 ******** 2026-01-30 06:56:58.623145 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:58.623151 | orchestrator | 2026-01-30 06:56:58.623158 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-30 06:56:58.623170 | orchestrator | Friday 30 January 2026 06:56:54 +0000 (0:00:01.147) 1:08:48.449 ******** 2026-01-30 06:56:58.623177 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:58.623183 | orchestrator | 2026-01-30 06:56:58.623197 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-30 06:56:58.623203 | orchestrator | Friday 30 January 2026 06:56:56 +0000 (0:00:01.204) 1:08:49.654 ******** 2026-01-30 06:56:58.623210 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:58.623216 | orchestrator | 2026-01-30 06:56:58.623223 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-30 06:56:58.623231 
| orchestrator | Friday 30 January 2026 06:56:57 +0000 (0:00:01.127) 1:08:50.782 ******** 2026-01-30 06:56:58.623238 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:56:58.623245 | orchestrator | 2026-01-30 06:56:58.623252 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-01-30 06:56:58.623258 | orchestrator | Friday 30 January 2026 06:56:58 +0000 (0:00:01.224) 1:08:52.007 ******** 2026-01-30 06:56:58.623266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:58.623274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}})  2026-01-30 06:56:58.623281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-01-30 06:56:58.623294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}})  2026-01-30 06:56:59.751004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751118 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-01-30 06:56:59.751187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}})  2026-01-30 06:56:59.751245 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}})  2026-01-30 06:56:59.751256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751282 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-01-30 06:56:59.751293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-01-30 06:56:59.751347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-01-30 06:56:59.963604 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:56:59.963680 | orchestrator | 2026-01-30 06:56:59.963688 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-30 06:56:59.963694 | orchestrator | Friday 30 January 2026 06:56:59 +0000 (0:00:01.344) 1:08:53.352 ******** 2026-01-30 06:56:59.963715 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963724 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd', 'dm-uuid-LVM-e25B62TcR7m1aKxZdFFNfCoPo2hiWbqFyQ0Rz2dNQZbt8knuAMu5WysfjiIW5D3w'], 'uuids': ['a3f925e6-2085-4b8c-91be-2cc24bf9419d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290', 'scsi-SQEMU_QEMU_HARDDISK_5a64c5df-bd04-40a2-9182-2fad2953f290'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5a64c5df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963738 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1uohJ9-WB0A-S0d6-HKW1-Rhm5-CrkX-vckrMn', 'scsi-0QEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de', 'scsi-SQEMU_QEMU_HARDDISK_6d18679f-3a03-46cd-a085-d473f98711de'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963783 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-01-30-02-37-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963799 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs', 'dm-uuid-CRYPT-LUKS2-637bf93ed542432381ae3194718153fd-TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963808 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:56:59.963818 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c96ee3ed--1860--5729--adba--bbe0a3b53c50-osd--block--c96ee3ed--1860--5729--adba--bbe0a3b53c50', 'dm-uuid-LVM-X0hpJnLn1EP2KwwCaQMBl2350ulPjIj3TklgUpxdoknqVj7QWJpteNEbtSyswjBs'], 'uuids': ['637bf93e-d542-4323-81ae-3194718153fd'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d18679f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['TklgUp-xdok-nqVj-7QWJ-pteN-EbtS-yswjBs']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:57:13.171381 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6YLLCn-05NK-7EBi-pusT-724G-6pao-IOT8I4', 'scsi-0QEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660', 'scsi-SQEMU_QEMU_HARDDISK_2ae1d0dd-0196-4b2a-8ddd-94d4cb6bb660'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ae1d0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd-osd--block--484c5dd7--ec3c--5b7c--8938--cd2a84a156dd']}}, 'ansible_loop_var': 'item'})  2026-01-30 06:57:13.171479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:57:13.171492 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '78d852ad', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1', 'scsi-SQEMU_QEMU_HARDDISK_78d852ad-2d79-4944-8416-895694d96844-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:57:13.171536 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:57:13.171543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:57:13.171621 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w', 'dm-uuid-CRYPT-LUKS2-a3f925e620854b8c91be2cc24bf9419d-yQ0Rz2-dNQZ-bt8k-nuAM-u5Wy-sfji-IW5D3w'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-01-30 06:57:13.171629 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:13.171635 | orchestrator | 2026-01-30 06:57:13.171640 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-01-30 06:57:13.171646 | orchestrator | Friday 30 January 2026 06:57:01 +0000 (0:00:01.419) 1:08:54.772 ******** 2026-01-30 06:57:13.171651 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:13.171657 | orchestrator | 2026-01-30 06:57:13.171662 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-01-30 06:57:13.171667 | orchestrator | Friday 30 January 2026 06:57:02 +0000 (0:00:01.496) 1:08:56.268 ******** 2026-01-30 06:57:13.171672 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:13.171676 | orchestrator | 2026-01-30 06:57:13.171681 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:57:13.171685 | orchestrator | Friday 30 January 2026 06:57:03 +0000 (0:00:01.114) 1:08:57.383 ******** 2026-01-30 06:57:13.171690 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:13.171694 | orchestrator | 2026-01-30 06:57:13.171699 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:57:13.171703 | orchestrator | Friday 30 January 2026 06:57:05 +0000 (0:00:01.497) 1:08:58.881 ******** 2026-01-30 06:57:13.171708 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:13.171713 | orchestrator | 2026-01-30 06:57:13.171717 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-01-30 06:57:13.171722 | orchestrator | Friday 30 January 2026 06:57:06 +0000 (0:00:01.124) 1:09:00.005 ******** 2026-01-30 06:57:13.171732 | orchestrator | skipping: [testbed-node-5] 2026-01-30 
06:57:13.171736 | orchestrator | 2026-01-30 06:57:13.171741 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-01-30 06:57:13.171745 | orchestrator | Friday 30 January 2026 06:57:08 +0000 (0:00:01.683) 1:09:01.688 ******** 2026-01-30 06:57:13.171750 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:13.171754 | orchestrator | 2026-01-30 06:57:13.171759 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-01-30 06:57:13.171763 | orchestrator | Friday 30 January 2026 06:57:09 +0000 (0:00:01.140) 1:09:02.829 ******** 2026-01-30 06:57:13.171768 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-01-30 06:57:13.171773 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-01-30 06:57:13.171778 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-01-30 06:57:13.171782 | orchestrator | 2026-01-30 06:57:13.171787 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-01-30 06:57:13.171791 | orchestrator | Friday 30 January 2026 06:57:10 +0000 (0:00:01.676) 1:09:04.505 ******** 2026-01-30 06:57:13.171796 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-30 06:57:13.171801 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-30 06:57:13.171805 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-30 06:57:13.171810 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:13.171814 | orchestrator | 2026-01-30 06:57:13.171819 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-30 06:57:13.171823 | orchestrator | Friday 30 January 2026 06:57:12 +0000 (0:00:01.121) 1:09:05.627 ******** 2026-01-30 06:57:13.171828 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-01-30 06:57:13.171833 | 
orchestrator | 2026-01-30 06:57:13.171842 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:57:54.280269 | orchestrator | Friday 30 January 2026 06:57:13 +0000 (0:00:01.140) 1:09:06.768 ******** 2026-01-30 06:57:54.280363 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.280373 | orchestrator | 2026-01-30 06:57:54.280380 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:57:54.280387 | orchestrator | Friday 30 January 2026 06:57:14 +0000 (0:00:01.120) 1:09:07.888 ******** 2026-01-30 06:57:54.280393 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.280400 | orchestrator | 2026-01-30 06:57:54.280406 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 06:57:54.280413 | orchestrator | Friday 30 January 2026 06:57:15 +0000 (0:00:01.122) 1:09:09.011 ******** 2026-01-30 06:57:54.280419 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.280425 | orchestrator | 2026-01-30 06:57:54.280431 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 06:57:54.280438 | orchestrator | Friday 30 January 2026 06:57:16 +0000 (0:00:00.962) 1:09:09.974 ******** 2026-01-30 06:57:54.280444 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.280452 | orchestrator | 2026-01-30 06:57:54.280458 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 06:57:54.280464 | orchestrator | Friday 30 January 2026 06:57:17 +0000 (0:00:01.002) 1:09:10.976 ******** 2026-01-30 06:57:54.280471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:57:54.280477 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:57:54.280483 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-01-30 06:57:54.280489 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.280495 | orchestrator | 2026-01-30 06:57:54.280502 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 06:57:54.280550 | orchestrator | Friday 30 January 2026 06:57:18 +0000 (0:00:01.367) 1:09:12.343 ******** 2026-01-30 06:57:54.280576 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:57:54.280583 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:57:54.280589 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 06:57:54.280595 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.280601 | orchestrator | 2026-01-30 06:57:54.280607 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 06:57:54.280613 | orchestrator | Friday 30 January 2026 06:57:20 +0000 (0:00:01.569) 1:09:13.913 ******** 2026-01-30 06:57:54.280619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 06:57:54.280625 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 06:57:54.280632 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 06:57:54.280638 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.280644 | orchestrator | 2026-01-30 06:57:54.280650 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 06:57:54.280656 | orchestrator | Friday 30 January 2026 06:57:21 +0000 (0:00:01.571) 1:09:15.484 ******** 2026-01-30 06:57:54.280662 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.280668 | orchestrator | 2026-01-30 06:57:54.280674 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 06:57:54.280681 | orchestrator | Friday 30 January 2026 06:57:23 +0000 
(0:00:01.181) 1:09:16.666 ******** 2026-01-30 06:57:54.280687 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-30 06:57:54.280693 | orchestrator | 2026-01-30 06:57:54.280699 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-30 06:57:54.280705 | orchestrator | Friday 30 January 2026 06:57:24 +0000 (0:00:01.339) 1:09:18.006 ******** 2026-01-30 06:57:54.280712 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:57:54.280720 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:57:54.280726 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:57:54.280732 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-01-30 06:57:54.280738 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:57:54.280744 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-01-30 06:57:54.280750 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:57:54.280756 | orchestrator | 2026-01-30 06:57:54.280762 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-30 06:57:54.280769 | orchestrator | Friday 30 January 2026 06:57:26 +0000 (0:00:01.873) 1:09:19.879 ******** 2026-01-30 06:57:54.280775 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-30 06:57:54.280781 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-30 06:57:54.280787 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-30 06:57:54.280793 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-01-30 06:57:54.280799 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-30 06:57:54.280805 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-01-30 06:57:54.280811 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-30 06:57:54.280817 | orchestrator | 2026-01-30 06:57:54.280824 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-01-30 06:57:54.280831 | orchestrator | Friday 30 January 2026 06:57:28 +0000 (0:00:02.263) 1:09:22.143 ******** 2026-01-30 06:57:54.280838 | orchestrator | changed: [testbed-node-5] 2026-01-30 06:57:54.280845 | orchestrator | 2026-01-30 06:57:54.280865 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-01-30 06:57:54.280877 | orchestrator | Friday 30 January 2026 06:57:30 +0000 (0:00:02.060) 1:09:24.203 ******** 2026-01-30 06:57:54.280885 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:57:54.280892 | orchestrator | 2026-01-30 06:57:54.280899 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-01-30 06:57:54.280906 | orchestrator | Friday 30 January 2026 06:57:33 +0000 (0:00:02.564) 1:09:26.768 ******** 2026-01-30 06:57:54.280914 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:57:54.280921 | orchestrator | 2026-01-30 06:57:54.280928 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 06:57:54.280935 | orchestrator | Friday 30 January 2026 06:57:35 +0000 (0:00:01.983) 1:09:28.751 ******** 2026-01-30 06:57:54.280943 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-01-30 06:57:54.280950 | orchestrator | 2026-01-30 06:57:54.280957 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 06:57:54.280964 | orchestrator | Friday 30 January 2026 06:57:36 +0000 (0:00:01.133) 1:09:29.885 ******** 2026-01-30 06:57:54.280971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-01-30 06:57:54.280978 | orchestrator | 2026-01-30 06:57:54.280984 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 06:57:54.280991 | orchestrator | Friday 30 January 2026 06:57:37 +0000 (0:00:01.148) 1:09:31.033 ******** 2026-01-30 06:57:54.280998 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.281005 | orchestrator | 2026-01-30 06:57:54.281012 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 06:57:54.281019 | orchestrator | Friday 30 January 2026 06:57:38 +0000 (0:00:01.130) 1:09:32.164 ******** 2026-01-30 06:57:54.281026 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281033 | orchestrator | 2026-01-30 06:57:54.281045 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-01-30 06:57:54.281055 | orchestrator | Friday 30 January 2026 06:57:40 +0000 (0:00:01.512) 1:09:33.676 ******** 2026-01-30 06:57:54.281067 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281080 | orchestrator | 2026-01-30 06:57:54.281095 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 06:57:54.281105 | orchestrator | Friday 30 January 2026 06:57:41 +0000 (0:00:01.503) 1:09:35.180 ******** 2026-01-30 06:57:54.281116 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281128 | orchestrator | 2026-01-30 06:57:54.281139 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 06:57:54.281149 | orchestrator | Friday 30 January 2026 06:57:43 +0000 (0:00:01.528) 1:09:36.708 ******** 2026-01-30 06:57:54.281159 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.281171 | orchestrator | 2026-01-30 06:57:54.281182 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 06:57:54.281193 | orchestrator | Friday 30 January 2026 06:57:44 +0000 (0:00:01.128) 1:09:37.836 ******** 2026-01-30 06:57:54.281201 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.281207 | orchestrator | 2026-01-30 06:57:54.281213 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 06:57:54.281219 | orchestrator | Friday 30 January 2026 06:57:45 +0000 (0:00:01.097) 1:09:38.934 ******** 2026-01-30 06:57:54.281225 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.281231 | orchestrator | 2026-01-30 06:57:54.281237 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 06:57:54.281244 | orchestrator | Friday 30 January 2026 06:57:46 +0000 (0:00:01.121) 1:09:40.055 ******** 2026-01-30 06:57:54.281250 | 
orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281256 | orchestrator | 2026-01-30 06:57:54.281262 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 06:57:54.281275 | orchestrator | Friday 30 January 2026 06:57:47 +0000 (0:00:01.525) 1:09:41.581 ******** 2026-01-30 06:57:54.281281 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281287 | orchestrator | 2026-01-30 06:57:54.281293 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 06:57:54.281300 | orchestrator | Friday 30 January 2026 06:57:49 +0000 (0:00:01.588) 1:09:43.169 ******** 2026-01-30 06:57:54.281306 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.281312 | orchestrator | 2026-01-30 06:57:54.281318 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 06:57:54.281324 | orchestrator | Friday 30 January 2026 06:57:50 +0000 (0:00:00.761) 1:09:43.931 ******** 2026-01-30 06:57:54.281330 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.281336 | orchestrator | 2026-01-30 06:57:54.281342 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 06:57:54.281348 | orchestrator | Friday 30 January 2026 06:57:51 +0000 (0:00:00.773) 1:09:44.705 ******** 2026-01-30 06:57:54.281354 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281361 | orchestrator | 2026-01-30 06:57:54.281367 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 06:57:54.281373 | orchestrator | Friday 30 January 2026 06:57:51 +0000 (0:00:00.781) 1:09:45.486 ******** 2026-01-30 06:57:54.281379 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281385 | orchestrator | 2026-01-30 06:57:54.281391 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 06:57:54.281397 
| orchestrator | Friday 30 January 2026 06:57:52 +0000 (0:00:00.783) 1:09:46.270 ******** 2026-01-30 06:57:54.281404 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:57:54.281410 | orchestrator | 2026-01-30 06:57:54.281416 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 06:57:54.281422 | orchestrator | Friday 30 January 2026 06:57:53 +0000 (0:00:00.778) 1:09:47.049 ******** 2026-01-30 06:57:54.281428 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:57:54.281434 | orchestrator | 2026-01-30 06:57:54.281445 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 06:58:34.538288 | orchestrator | Friday 30 January 2026 06:57:54 +0000 (0:00:00.831) 1:09:47.881 ******** 2026-01-30 06:58:34.538398 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538413 | orchestrator | 2026-01-30 06:58:34.538424 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 06:58:34.538433 | orchestrator | Friday 30 January 2026 06:57:55 +0000 (0:00:00.764) 1:09:48.645 ******** 2026-01-30 06:58:34.538442 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538451 | orchestrator | 2026-01-30 06:58:34.538460 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 06:58:34.538531 | orchestrator | Friday 30 January 2026 06:57:55 +0000 (0:00:00.767) 1:09:49.413 ******** 2026-01-30 06:58:34.538543 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:58:34.538553 | orchestrator | 2026-01-30 06:58:34.538562 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 06:58:34.538571 | orchestrator | Friday 30 January 2026 06:57:56 +0000 (0:00:00.823) 1:09:50.237 ******** 2026-01-30 06:58:34.538579 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:58:34.538588 | orchestrator | 2026-01-30 06:58:34.538597 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-01-30 06:58:34.538606 | orchestrator | Friday 30 January 2026 06:57:57 +0000 (0:00:00.786) 1:09:51.023 ******** 2026-01-30 06:58:34.538614 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538623 | orchestrator | 2026-01-30 06:58:34.538632 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-01-30 06:58:34.538640 | orchestrator | Friday 30 January 2026 06:57:58 +0000 (0:00:00.842) 1:09:51.865 ******** 2026-01-30 06:58:34.538649 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538657 | orchestrator | 2026-01-30 06:58:34.538666 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-01-30 06:58:34.538696 | orchestrator | Friday 30 January 2026 06:57:59 +0000 (0:00:00.772) 1:09:52.638 ******** 2026-01-30 06:58:34.538705 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538714 | orchestrator | 2026-01-30 06:58:34.538723 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-01-30 06:58:34.538731 | orchestrator | Friday 30 January 2026 06:57:59 +0000 (0:00:00.775) 1:09:53.414 ******** 2026-01-30 06:58:34.538739 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538748 | orchestrator | 2026-01-30 06:58:34.538756 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-01-30 06:58:34.538765 | orchestrator | Friday 30 January 2026 06:58:00 +0000 (0:00:00.772) 1:09:54.187 ******** 2026-01-30 06:58:34.538773 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538782 | orchestrator | 2026-01-30 06:58:34.538790 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-01-30 06:58:34.538803 | orchestrator | Friday 30 January 2026 06:58:01 +0000 (0:00:00.771) 1:09:54.958 ******** 
2026-01-30 06:58:34.538813 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538823 | orchestrator | 2026-01-30 06:58:34.538833 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-01-30 06:58:34.538842 | orchestrator | Friday 30 January 2026 06:58:02 +0000 (0:00:00.757) 1:09:55.716 ******** 2026-01-30 06:58:34.538852 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538862 | orchestrator | 2026-01-30 06:58:34.538871 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-01-30 06:58:34.538882 | orchestrator | Friday 30 January 2026 06:58:02 +0000 (0:00:00.759) 1:09:56.475 ******** 2026-01-30 06:58:34.538892 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538901 | orchestrator | 2026-01-30 06:58:34.538911 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-01-30 06:58:34.538921 | orchestrator | Friday 30 January 2026 06:58:03 +0000 (0:00:00.831) 1:09:57.308 ******** 2026-01-30 06:58:34.538931 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538941 | orchestrator | 2026-01-30 06:58:34.538951 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-01-30 06:58:34.538960 | orchestrator | Friday 30 January 2026 06:58:04 +0000 (0:00:00.762) 1:09:58.071 ******** 2026-01-30 06:58:34.538968 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.538977 | orchestrator | 2026-01-30 06:58:34.538985 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-01-30 06:58:34.538994 | orchestrator | Friday 30 January 2026 06:58:05 +0000 (0:00:00.750) 1:09:58.821 ******** 2026-01-30 06:58:34.539002 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539011 | orchestrator | 2026-01-30 06:58:34.539019 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-01-30 06:58:34.539028 | orchestrator | Friday 30 January 2026 06:58:05 +0000 (0:00:00.765) 1:09:59.586 ******** 2026-01-30 06:58:34.539036 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539045 | orchestrator | 2026-01-30 06:58:34.539054 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-30 06:58:34.539062 | orchestrator | Friday 30 January 2026 06:58:06 +0000 (0:00:00.760) 1:10:00.347 ******** 2026-01-30 06:58:34.539071 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:58:34.539079 | orchestrator | 2026-01-30 06:58:34.539088 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-30 06:58:34.539096 | orchestrator | Friday 30 January 2026 06:58:08 +0000 (0:00:01.625) 1:10:01.972 ******** 2026-01-30 06:58:34.539105 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:58:34.539114 | orchestrator | 2026-01-30 06:58:34.539123 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-30 06:58:34.539131 | orchestrator | Friday 30 January 2026 06:58:10 +0000 (0:00:01.920) 1:10:03.893 ******** 2026-01-30 06:58:34.539140 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-01-30 06:58:34.539156 | orchestrator | 2026-01-30 06:58:34.539164 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-30 06:58:34.539173 | orchestrator | Friday 30 January 2026 06:58:11 +0000 (0:00:01.133) 1:10:05.026 ******** 2026-01-30 06:58:34.539182 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539190 | orchestrator | 2026-01-30 06:58:34.539199 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-30 06:58:34.539222 | orchestrator | Friday 30 January 2026 06:58:12 +0000 (0:00:01.141) 1:10:06.167 ******** 
2026-01-30 06:58:34.539232 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539241 | orchestrator | 2026-01-30 06:58:34.539249 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-30 06:58:34.539258 | orchestrator | Friday 30 January 2026 06:58:13 +0000 (0:00:01.122) 1:10:07.290 ******** 2026-01-30 06:58:34.539267 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-01-30 06:58:34.539275 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-01-30 06:58:34.539284 | orchestrator | 2026-01-30 06:58:34.539293 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-01-30 06:58:34.539301 | orchestrator | Friday 30 January 2026 06:58:15 +0000 (0:00:01.788) 1:10:09.078 ******** 2026-01-30 06:58:34.539310 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:58:34.539319 | orchestrator | 2026-01-30 06:58:34.539327 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-01-30 06:58:34.539336 | orchestrator | Friday 30 January 2026 06:58:16 +0000 (0:00:01.436) 1:10:10.515 ******** 2026-01-30 06:58:34.539344 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539353 | orchestrator | 2026-01-30 06:58:34.539361 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-01-30 06:58:34.539370 | orchestrator | Friday 30 January 2026 06:58:18 +0000 (0:00:01.293) 1:10:11.809 ******** 2026-01-30 06:58:34.539378 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539387 | orchestrator | 2026-01-30 06:58:34.539395 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-01-30 06:58:34.539404 | orchestrator | Friday 30 January 2026 06:58:19 +0000 (0:00:00.868) 1:10:12.677 ******** 2026-01-30 06:58:34.539412 | orchestrator | 
skipping: [testbed-node-5] 2026-01-30 06:58:34.539421 | orchestrator | 2026-01-30 06:58:34.539430 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-01-30 06:58:34.539438 | orchestrator | Friday 30 January 2026 06:58:19 +0000 (0:00:00.750) 1:10:13.427 ******** 2026-01-30 06:58:34.539447 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-01-30 06:58:34.539455 | orchestrator | 2026-01-30 06:58:34.539463 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-01-30 06:58:34.539493 | orchestrator | Friday 30 January 2026 06:58:20 +0000 (0:00:01.104) 1:10:14.532 ******** 2026-01-30 06:58:34.539502 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:58:34.539511 | orchestrator | 2026-01-30 06:58:34.539520 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-01-30 06:58:34.539528 | orchestrator | Friday 30 January 2026 06:58:22 +0000 (0:00:01.689) 1:10:16.222 ******** 2026-01-30 06:58:34.539537 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-01-30 06:58:34.539545 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-01-30 06:58:34.539553 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-01-30 06:58:34.539562 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539570 | orchestrator | 2026-01-30 06:58:34.539579 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-01-30 06:58:34.539587 | orchestrator | Friday 30 January 2026 06:58:23 +0000 (0:00:01.184) 1:10:17.407 ******** 2026-01-30 06:58:34.539596 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539604 | orchestrator | 2026-01-30 06:58:34.539620 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-01-30 06:58:34.539628 | orchestrator | Friday 30 January 2026 06:58:24 +0000 (0:00:01.113) 1:10:18.520 ******** 2026-01-30 06:58:34.539637 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539645 | orchestrator | 2026-01-30 06:58:34.539654 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-01-30 06:58:34.539662 | orchestrator | Friday 30 January 2026 06:58:26 +0000 (0:00:01.179) 1:10:19.699 ******** 2026-01-30 06:58:34.539671 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539679 | orchestrator | 2026-01-30 06:58:34.539688 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-01-30 06:58:34.539696 | orchestrator | Friday 30 January 2026 06:58:27 +0000 (0:00:01.133) 1:10:20.833 ******** 2026-01-30 06:58:34.539705 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539713 | orchestrator | 2026-01-30 06:58:34.539721 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-01-30 06:58:34.539730 | orchestrator | Friday 30 January 2026 06:58:28 +0000 (0:00:01.190) 1:10:22.024 ******** 2026-01-30 06:58:34.539738 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539747 | orchestrator | 2026-01-30 06:58:34.539755 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-01-30 06:58:34.539764 | orchestrator | Friday 30 January 2026 06:58:29 +0000 (0:00:00.833) 1:10:22.858 ******** 2026-01-30 06:58:34.539772 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:58:34.539781 | orchestrator | 2026-01-30 06:58:34.539789 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-01-30 06:58:34.539798 | orchestrator | Friday 30 January 2026 06:58:31 +0000 (0:00:02.176) 1:10:25.034 ******** 2026-01-30 06:58:34.539806 | orchestrator | ok: 
[testbed-node-5] 2026-01-30 06:58:34.539815 | orchestrator | 2026-01-30 06:58:34.539823 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-01-30 06:58:34.539831 | orchestrator | Friday 30 January 2026 06:58:32 +0000 (0:00:00.861) 1:10:25.896 ******** 2026-01-30 06:58:34.539840 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-01-30 06:58:34.539848 | orchestrator | 2026-01-30 06:58:34.539857 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-01-30 06:58:34.539865 | orchestrator | Friday 30 January 2026 06:58:33 +0000 (0:00:01.099) 1:10:26.996 ******** 2026-01-30 06:58:34.539874 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:58:34.539883 | orchestrator | 2026-01-30 06:58:34.539891 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-01-30 06:58:34.539905 | orchestrator | Friday 30 January 2026 06:58:34 +0000 (0:00:01.142) 1:10:28.138 ******** 2026-01-30 06:59:16.018899 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.019017 | orchestrator | 2026-01-30 06:59:16.019035 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-01-30 06:59:16.019048 | orchestrator | Friday 30 January 2026 06:58:35 +0000 (0:00:01.157) 1:10:29.296 ******** 2026-01-30 06:59:16.019059 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.019070 | orchestrator | 2026-01-30 06:59:16.019082 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-01-30 06:59:16.019094 | orchestrator | Friday 30 January 2026 06:58:36 +0000 (0:00:01.145) 1:10:30.441 ******** 2026-01-30 06:59:16.019105 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.019116 | orchestrator | 2026-01-30 06:59:16.019127 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-01-30 06:59:16.019138 | orchestrator | Friday 30 January 2026 06:58:37 +0000 (0:00:01.127) 1:10:31.569 ******** 2026-01-30 06:59:16.019149 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.019160 | orchestrator | 2026-01-30 06:59:16.019171 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-01-30 06:59:16.019182 | orchestrator | Friday 30 January 2026 06:58:39 +0000 (0:00:01.199) 1:10:32.768 ******** 2026-01-30 06:59:16.019193 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.019229 | orchestrator | 2026-01-30 06:59:16.019241 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-01-30 06:59:16.019252 | orchestrator | Friday 30 January 2026 06:58:40 +0000 (0:00:01.130) 1:10:33.899 ******** 2026-01-30 06:59:16.019263 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.019369 | orchestrator | 2026-01-30 06:59:16.019381 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-01-30 06:59:16.019392 | orchestrator | Friday 30 January 2026 06:58:41 +0000 (0:00:01.156) 1:10:35.055 ******** 2026-01-30 06:59:16.019403 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.019414 | orchestrator | 2026-01-30 06:59:16.019426 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-01-30 06:59:16.019472 | orchestrator | Friday 30 January 2026 06:58:42 +0000 (0:00:01.152) 1:10:36.208 ******** 2026-01-30 06:59:16.019490 | orchestrator | ok: [testbed-node-5] 2026-01-30 06:59:16.019504 | orchestrator | 2026-01-30 06:59:16.019517 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-01-30 06:59:16.019530 | orchestrator | Friday 30 January 2026 06:58:43 +0000 (0:00:00.789) 1:10:36.997 ******** 2026-01-30 06:59:16.019542 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-01-30 06:59:16.019556 | orchestrator | 2026-01-30 06:59:16.019569 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-01-30 06:59:16.019582 | orchestrator | Friday 30 January 2026 06:58:44 +0000 (0:00:01.221) 1:10:38.219 ******** 2026-01-30 06:59:16.019594 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-01-30 06:59:16.019608 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-01-30 06:59:16.019620 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-01-30 06:59:16.019644 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-01-30 06:59:16.019657 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-01-30 06:59:16.019670 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-01-30 06:59:16.019683 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-01-30 06:59:16.019696 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-01-30 06:59:16.019709 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-01-30 06:59:16.019722 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-01-30 06:59:16.019735 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-01-30 06:59:16.019748 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-01-30 06:59:16.019761 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-01-30 06:59:16.019775 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-01-30 06:59:16.019788 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-01-30 06:59:16.019801 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-01-30 06:59:16.019812 | orchestrator | 2026-01-30 06:59:16.019822 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-01-30 06:59:16.019833 | orchestrator | Friday 30 January 2026 06:58:51 +0000 (0:00:06.510) 1:10:44.730 ******** 2026-01-30 06:59:16.019844 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-01-30 06:59:16.019855 | orchestrator | 2026-01-30 06:59:16.019866 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-01-30 06:59:16.019877 | orchestrator | Friday 30 January 2026 06:58:52 +0000 (0:00:01.109) 1:10:45.839 ******** 2026-01-30 06:59:16.019888 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:59:16.019900 | orchestrator | 2026-01-30 06:59:16.019912 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-01-30 06:59:16.019923 | orchestrator | Friday 30 January 2026 06:58:53 +0000 (0:00:01.533) 1:10:47.373 ******** 2026-01-30 06:59:16.019945 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:59:16.019956 | orchestrator | 2026-01-30 06:59:16.019967 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-01-30 06:59:16.020020 | orchestrator | Friday 30 January 2026 06:58:55 +0000 (0:00:01.632) 1:10:49.006 ******** 2026-01-30 06:59:16.020032 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020043 | orchestrator | 2026-01-30 06:59:16.020054 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-01-30 06:59:16.020085 | orchestrator | Friday 30 January 2026 06:58:56 +0000 (0:00:00.783) 1:10:49.790 ******** 2026-01-30 06:59:16.020097 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020108 | 
orchestrator | 2026-01-30 06:59:16.020119 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-01-30 06:59:16.020130 | orchestrator | Friday 30 January 2026 06:58:56 +0000 (0:00:00.780) 1:10:50.570 ******** 2026-01-30 06:59:16.020140 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020151 | orchestrator | 2026-01-30 06:59:16.020162 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-01-30 06:59:16.020173 | orchestrator | Friday 30 January 2026 06:58:57 +0000 (0:00:00.755) 1:10:51.326 ******** 2026-01-30 06:59:16.020184 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020195 | orchestrator | 2026-01-30 06:59:16.020206 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-01-30 06:59:16.020217 | orchestrator | Friday 30 January 2026 06:58:58 +0000 (0:00:00.781) 1:10:52.108 ******** 2026-01-30 06:59:16.020228 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020239 | orchestrator | 2026-01-30 06:59:16.020250 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-01-30 06:59:16.020261 | orchestrator | Friday 30 January 2026 06:58:59 +0000 (0:00:00.783) 1:10:52.891 ******** 2026-01-30 06:59:16.020272 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020283 | orchestrator | 2026-01-30 06:59:16.020293 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-01-30 06:59:16.020304 | orchestrator | Friday 30 January 2026 06:59:00 +0000 (0:00:00.774) 1:10:53.666 ******** 2026-01-30 06:59:16.020315 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020326 | orchestrator | 2026-01-30 06:59:16.020337 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-01-30 06:59:16.020348 | orchestrator | Friday 30 January 2026 06:59:00 +0000 (0:00:00.827) 1:10:54.493 ******** 2026-01-30 06:59:16.020359 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020370 | orchestrator | 2026-01-30 06:59:16.020380 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-01-30 06:59:16.020391 | orchestrator | Friday 30 January 2026 06:59:01 +0000 (0:00:00.797) 1:10:55.291 ******** 2026-01-30 06:59:16.020402 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020413 | orchestrator | 2026-01-30 06:59:16.020424 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-01-30 06:59:16.020458 | orchestrator | Friday 30 January 2026 06:59:02 +0000 (0:00:00.748) 1:10:56.039 ******** 2026-01-30 06:59:16.020470 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020481 | orchestrator | 2026-01-30 06:59:16.020492 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-01-30 06:59:16.020503 | orchestrator | Friday 30 January 2026 06:59:03 +0000 (0:00:00.779) 1:10:56.819 ******** 2026-01-30 06:59:16.020514 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020525 | orchestrator | 2026-01-30 06:59:16.020536 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-01-30 06:59:16.020546 | orchestrator | Friday 30 January 2026 06:59:04 +0000 (0:00:00.812) 1:10:57.632 ******** 2026-01-30 06:59:16.020557 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-01-30 06:59:16.020576 | orchestrator | 2026-01-30 06:59:16.020587 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-01-30 06:59:16.020597 | orchestrator | Friday 30 January 2026 06:59:08 +0000 (0:00:04.152) 1:11:01.784 ******** 2026-01-30 06:59:16.020608 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 06:59:16.020619 | orchestrator | 2026-01-30 06:59:16.020630 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-30 06:59:16.020640 | orchestrator | Friday 30 January 2026 06:59:09 +0000 (0:00:00.835) 1:11:02.620 ******** 2026-01-30 06:59:16.020654 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-01-30 06:59:16.020669 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-01-30 06:59:16.020681 | orchestrator | 2026-01-30 06:59:16.020692 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-30 06:59:16.020797 | orchestrator | Friday 30 January 2026 06:59:13 +0000 (0:00:04.630) 1:11:07.251 ******** 2026-01-30 06:59:16.020811 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020822 | orchestrator | 2026-01-30 06:59:16.020833 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-30 06:59:16.020844 | orchestrator | Friday 30 January 2026 06:59:14 +0000 (0:00:00.809) 1:11:08.060 ******** 2026-01-30 06:59:16.020896 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020909 | orchestrator | 2026-01-30 06:59:16.020919 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-30 06:59:16.020930 | orchestrator | Friday 30 January 2026 06:59:15 +0000 (0:00:00.775) 1:11:08.836 ******** 2026-01-30 06:59:16.020941 | orchestrator | skipping: [testbed-node-5] 2026-01-30 06:59:16.020953 | orchestrator | 2026-01-30 06:59:16.020964 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-30 06:59:16.020984 | orchestrator | Friday 30 January 2026 06:59:16 +0000 (0:00:00.782) 1:11:09.619 ******** 2026-01-30 07:00:24.073828 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.073954 | orchestrator | 2026-01-30 07:00:24.073963 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-01-30 07:00:24.073971 | orchestrator | Friday 30 January 2026 06:59:16 +0000 (0:00:00.797) 1:11:10.417 ******** 2026-01-30 07:00:24.073976 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.073982 | orchestrator | 2026-01-30 07:00:24.073988 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-30 07:00:24.073993 | orchestrator | Friday 30 January 2026 06:59:17 +0000 (0:00:00.796) 1:11:11.214 ******** 2026-01-30 07:00:24.073999 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:00:24.074005 | orchestrator | 2026-01-30 07:00:24.074011 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-30 07:00:24.074067 | orchestrator | Friday 30 January 2026 06:59:18 +0000 (0:00:01.127) 1:11:12.341 ******** 2026-01-30 07:00:24.074072 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 07:00:24.074078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 07:00:24.074084 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 07:00:24.074089 | orchestrator | skipping: 
[testbed-node-5] 2026-01-30 07:00:24.074094 | orchestrator | 2026-01-30 07:00:24.074100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-30 07:00:24.074128 | orchestrator | Friday 30 January 2026 06:59:19 +0000 (0:00:01.050) 1:11:13.392 ******** 2026-01-30 07:00:24.074133 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 07:00:24.074139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 07:00:24.074144 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 07:00:24.074173 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.074179 | orchestrator | 2026-01-30 07:00:24.074184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-30 07:00:24.074189 | orchestrator | Friday 30 January 2026 06:59:20 +0000 (0:00:01.137) 1:11:14.530 ******** 2026-01-30 07:00:24.074195 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-30 07:00:24.074200 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-30 07:00:24.074206 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-30 07:00:24.074211 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.074216 | orchestrator | 2026-01-30 07:00:24.074221 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-30 07:00:24.074226 | orchestrator | Friday 30 January 2026 06:59:21 +0000 (0:00:01.066) 1:11:15.596 ******** 2026-01-30 07:00:24.074231 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:00:24.074236 | orchestrator | 2026-01-30 07:00:24.074241 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-30 07:00:24.074247 | orchestrator | Friday 30 January 2026 06:59:22 +0000 (0:00:00.829) 1:11:16.425 ******** 2026-01-30 07:00:24.074252 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-01-30 07:00:24.074257 | orchestrator | 2026-01-30 07:00:24.074262 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-30 07:00:24.074267 | orchestrator | Friday 30 January 2026 06:59:23 +0000 (0:00:00.999) 1:11:17.425 ******** 2026-01-30 07:00:24.074272 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:00:24.074281 | orchestrator | 2026-01-30 07:00:24.074290 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-30 07:00:24.074298 | orchestrator | Friday 30 January 2026 06:59:25 +0000 (0:00:01.376) 1:11:18.802 ******** 2026-01-30 07:00:24.074307 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-01-30 07:00:24.074316 | orchestrator | 2026-01-30 07:00:24.074324 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-30 07:00:24.074333 | orchestrator | Friday 30 January 2026 06:59:26 +0000 (0:00:01.116) 1:11:19.919 ******** 2026-01-30 07:00:24.074342 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 07:00:24.074351 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-30 07:00:24.074360 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 07:00:24.074369 | orchestrator | 2026-01-30 07:00:24.074425 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-30 07:00:24.074436 | orchestrator | Friday 30 January 2026 06:59:29 +0000 (0:00:03.312) 1:11:23.232 ******** 2026-01-30 07:00:24.074445 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-01-30 07:00:24.074454 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-30 07:00:24.074460 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:00:24.074467 | orchestrator | 2026-01-30 07:00:24.074473 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-01-30 07:00:24.074478 | orchestrator | Friday 30 January 2026 06:59:31 +0000 (0:00:01.994) 1:11:25.226 ******** 2026-01-30 07:00:24.074483 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.074488 | orchestrator | 2026-01-30 07:00:24.074494 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-30 07:00:24.074499 | orchestrator | Friday 30 January 2026 06:59:32 +0000 (0:00:00.799) 1:11:26.026 ******** 2026-01-30 07:00:24.074504 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-01-30 07:00:24.074518 | orchestrator | 2026-01-30 07:00:24.074523 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-30 07:00:24.074528 | orchestrator | Friday 30 January 2026 06:59:33 +0000 (0:00:01.234) 1:11:27.260 ******** 2026-01-30 07:00:24.074535 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 07:00:24.074542 | orchestrator | 2026-01-30 07:00:24.074547 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-30 07:00:24.074553 | orchestrator | Friday 30 January 2026 06:59:35 +0000 (0:00:01.638) 1:11:28.899 ******** 2026-01-30 07:00:24.074576 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 07:00:24.074583 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-30 07:00:24.074588 | orchestrator | 2026-01-30 07:00:24.074593 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-30 07:00:24.074599 | orchestrator | Friday 30 January 2026 06:59:40 +0000 (0:00:05.331) 1:11:34.231 ******** 
2026-01-30 07:00:24.074604 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-30 07:00:24.074609 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-30 07:00:24.074615 | orchestrator | 2026-01-30 07:00:24.074620 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-30 07:00:24.074625 | orchestrator | Friday 30 January 2026 06:59:43 +0000 (0:00:03.237) 1:11:37.468 ******** 2026-01-30 07:00:24.074631 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-01-30 07:00:24.074636 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:00:24.074641 | orchestrator | 2026-01-30 07:00:24.074646 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-30 07:00:24.074651 | orchestrator | Friday 30 January 2026 06:59:45 +0000 (0:00:01.607) 1:11:39.076 ******** 2026-01-30 07:00:24.074656 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-01-30 07:00:24.074661 | orchestrator | 2026-01-30 07:00:24.074666 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-30 07:00:24.074671 | orchestrator | Friday 30 January 2026 06:59:46 +0000 (0:00:01.159) 1:11:40.235 ******** 2026-01-30 07:00:24.074676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074703 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.074708 | orchestrator | 2026-01-30 07:00:24.074713 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-30 07:00:24.074718 | orchestrator | Friday 30 January 2026 06:59:48 +0000 (0:00:01.589) 1:11:41.825 ******** 2026-01-30 07:00:24.074723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074728 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-30 07:00:24.074753 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.074758 | orchestrator | 2026-01-30 07:00:24.074763 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-30 07:00:24.074768 | orchestrator | Friday 30 January 2026 06:59:50 +0000 (0:00:02.004) 1:11:43.829 ******** 2026-01-30 07:00:24.074773 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 07:00:24.074778 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 07:00:24.074783 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 07:00:24.074789 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 07:00:24.074795 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-30 07:00:24.074801 | orchestrator | 2026-01-30 07:00:24.074806 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-30 07:00:24.074811 | orchestrator | Friday 30 January 2026 07:00:23 +0000 (0:00:33.072) 1:12:16.901 ******** 2026-01-30 07:00:24.074816 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:00:24.074821 | orchestrator | 2026-01-30 07:00:24.074826 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-30 07:00:24.074835 | orchestrator | Friday 30 January 2026 07:00:24 +0000 (0:00:00.772) 1:12:17.674 ******** 2026-01-30 07:01:18.662691 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:01:18.662812 | orchestrator | 2026-01-30 07:01:18.662828 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-30 07:01:18.662842 | orchestrator | Friday 30 January 2026 07:00:24 +0000 (0:00:00.775) 1:12:18.449 ******** 2026-01-30 07:01:18.662862 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-01-30 07:01:18.662887 | orchestrator | 2026-01-30 07:01:18.662904 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-01-30 07:01:18.662919 | orchestrator | Friday 30 January 2026 07:00:26 +0000 (0:00:01.277) 1:12:19.727 ******** 2026-01-30 07:01:18.662933 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-01-30 07:01:18.662949 | orchestrator | 2026-01-30 07:01:18.662965 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-30 07:01:18.662980 | orchestrator | Friday 30 January 2026 07:00:27 +0000 (0:00:01.095) 1:12:20.823 ******** 2026-01-30 07:01:18.662998 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.663017 | orchestrator | 2026-01-30 07:01:18.663033 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-30 07:01:18.663049 | orchestrator | Friday 30 January 2026 07:00:29 +0000 (0:00:02.035) 1:12:22.859 ******** 2026-01-30 07:01:18.663060 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.663069 | orchestrator | 2026-01-30 07:01:18.663079 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-01-30 07:01:18.663089 | orchestrator | Friday 30 January 2026 07:00:31 +0000 (0:00:02.023) 1:12:24.882 ******** 2026-01-30 07:01:18.663098 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.663108 | orchestrator | 2026-01-30 07:01:18.663118 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-01-30 07:01:18.663128 | orchestrator | Friday 30 January 2026 07:00:33 +0000 (0:00:02.280) 1:12:27.163 ******** 2026-01-30 07:01:18.663162 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-30 07:01:18.663173 | orchestrator | 2026-01-30 07:01:18.663183 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-01-30 07:01:18.663194 | 
orchestrator | skipping: no hosts matched 2026-01-30 07:01:18.663205 | orchestrator | 2026-01-30 07:01:18.663217 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-01-30 07:01:18.663228 | orchestrator | skipping: no hosts matched 2026-01-30 07:01:18.663239 | orchestrator | 2026-01-30 07:01:18.663250 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-01-30 07:01:18.663261 | orchestrator | skipping: no hosts matched 2026-01-30 07:01:18.663272 | orchestrator | 2026-01-30 07:01:18.663284 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-01-30 07:01:18.663295 | orchestrator | 2026-01-30 07:01:18.663306 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-01-30 07:01:18.663318 | orchestrator | Friday 30 January 2026 07:00:38 +0000 (0:00:04.652) 1:12:31.815 ******** 2026-01-30 07:01:18.663380 | orchestrator | changed: [testbed-node-0] 2026-01-30 07:01:18.663394 | orchestrator | changed: [testbed-node-2] 2026-01-30 07:01:18.663406 | orchestrator | changed: [testbed-node-1] 2026-01-30 07:01:18.663418 | orchestrator | changed: [testbed-node-3] 2026-01-30 07:01:18.663429 | orchestrator | changed: [testbed-node-4] 2026-01-30 07:01:18.663441 | orchestrator | changed: [testbed-node-5] 2026-01-30 07:01:18.663452 | orchestrator | 2026-01-30 07:01:18.663463 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-01-30 07:01:18.663475 | orchestrator | Friday 30 January 2026 07:00:41 +0000 (0:00:02.868) 1:12:34.683 ******** 2026-01-30 07:01:18.663485 | orchestrator | changed: [testbed-node-3] 2026-01-30 07:01:18.663497 | orchestrator | changed: [testbed-node-1] 2026-01-30 07:01:18.663508 | orchestrator | changed: [testbed-node-2] 2026-01-30 07:01:18.663520 | orchestrator | changed: [testbed-node-4] 2026-01-30 07:01:18.663531 | 
orchestrator | changed: [testbed-node-5] 2026-01-30 07:01:18.663542 | orchestrator | changed: [testbed-node-0] 2026-01-30 07:01:18.663553 | orchestrator | 2026-01-30 07:01:18.663564 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-30 07:01:18.663576 | orchestrator | Friday 30 January 2026 07:00:45 +0000 (0:00:04.473) 1:12:39.157 ******** 2026-01-30 07:01:18.663587 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:01:18.663596 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:01:18.663606 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:01:18.663615 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:01:18.663625 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:01:18.663646 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.663663 | orchestrator | 2026-01-30 07:01:18.663680 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-30 07:01:18.663705 | orchestrator | Friday 30 January 2026 07:00:47 +0000 (0:00:02.064) 1:12:41.221 ******** 2026-01-30 07:01:18.663724 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:01:18.663741 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:01:18.663758 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:01:18.663774 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:01:18.663789 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:01:18.663805 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.663822 | orchestrator | 2026-01-30 07:01:18.663837 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-30 07:01:18.663854 | orchestrator | Friday 30 January 2026 07:00:49 +0000 (0:00:02.319) 1:12:43.541 ******** 2026-01-30 07:01:18.663870 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 07:01:18.663887 | 
orchestrator | 2026-01-30 07:01:18.663906 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-30 07:01:18.663938 | orchestrator | Friday 30 January 2026 07:00:52 +0000 (0:00:02.299) 1:12:45.840 ******** 2026-01-30 07:01:18.663955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-30 07:01:18.663972 | orchestrator | 2026-01-30 07:01:18.664013 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-30 07:01:18.664031 | orchestrator | Friday 30 January 2026 07:00:54 +0000 (0:00:02.319) 1:12:48.160 ******** 2026-01-30 07:01:18.664047 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:01:18.664063 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:01:18.664080 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:01:18.664099 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:01:18.664116 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:01:18.664135 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:01:18.664153 | orchestrator | 2026-01-30 07:01:18.664170 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-30 07:01:18.664187 | orchestrator | Friday 30 January 2026 07:00:56 +0000 (0:00:02.163) 1:12:50.323 ******** 2026-01-30 07:01:18.664205 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:01:18.664221 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:01:18.664239 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:01:18.664257 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:01:18.664274 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:01:18.664291 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.664304 | orchestrator | 2026-01-30 07:01:18.664315 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-01-30 07:01:18.664324 | orchestrator | Friday 30 January 2026 07:00:59 +0000 (0:00:02.481) 1:12:52.805 ******** 2026-01-30 07:01:18.664401 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:01:18.664411 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:01:18.664421 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:01:18.664431 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:01:18.664440 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:01:18.664450 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.664459 | orchestrator | 2026-01-30 07:01:18.664469 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-30 07:01:18.664479 | orchestrator | Friday 30 January 2026 07:01:01 +0000 (0:00:02.283) 1:12:55.089 ******** 2026-01-30 07:01:18.664489 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:01:18.664498 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:01:18.664508 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:01:18.664517 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:01:18.664527 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:01:18.664536 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.664546 | orchestrator | 2026-01-30 07:01:18.664556 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-30 07:01:18.664565 | orchestrator | Friday 30 January 2026 07:01:03 +0000 (0:00:02.221) 1:12:57.311 ******** 2026-01-30 07:01:18.664575 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:01:18.664584 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:01:18.664594 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:01:18.664603 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:01:18.664613 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:01:18.664622 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:01:18.664632 | orchestrator | 
2026-01-30 07:01:18.664641 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-30 07:01:18.664651 | orchestrator | Friday 30 January 2026 07:01:05 +0000 (0:00:02.056) 1:12:59.367 ******** 2026-01-30 07:01:18.664660 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:01:18.664670 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:01:18.664679 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:01:18.664689 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:01:18.664698 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:01:18.664718 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:01:18.664727 | orchestrator | 2026-01-30 07:01:18.664737 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-30 07:01:18.664747 | orchestrator | Friday 30 January 2026 07:01:07 +0000 (0:00:01.729) 1:13:01.097 ******** 2026-01-30 07:01:18.664756 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:01:18.664766 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:01:18.664775 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:01:18.664784 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:01:18.664794 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:01:18.664803 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:01:18.664813 | orchestrator | 2026-01-30 07:01:18.664822 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-30 07:01:18.664832 | orchestrator | Friday 30 January 2026 07:01:09 +0000 (0:00:02.266) 1:13:03.363 ******** 2026-01-30 07:01:18.664841 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:01:18.664851 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:01:18.664860 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:01:18.664870 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:01:18.664879 | orchestrator | ok: [testbed-node-4] 
2026-01-30 07:01:18.664889 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.664898 | orchestrator | 2026-01-30 07:01:18.664908 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-30 07:01:18.664917 | orchestrator | Friday 30 January 2026 07:01:11 +0000 (0:00:02.234) 1:13:05.598 ******** 2026-01-30 07:01:18.664927 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:01:18.664935 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:01:18.664943 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:01:18.664951 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:01:18.664959 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:01:18.664966 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:01:18.664974 | orchestrator | 2026-01-30 07:01:18.664982 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-30 07:01:18.664990 | orchestrator | Friday 30 January 2026 07:01:14 +0000 (0:00:02.561) 1:13:08.159 ******** 2026-01-30 07:01:18.664997 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:01:18.665005 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:01:18.665013 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:01:18.665021 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:01:18.665029 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:01:18.665036 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:01:18.665044 | orchestrator | 2026-01-30 07:01:18.665052 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-30 07:01:18.665060 | orchestrator | Friday 30 January 2026 07:01:16 +0000 (0:00:02.068) 1:13:10.228 ******** 2026-01-30 07:01:18.665067 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:01:18.665075 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:01:18.665083 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:01:18.665091 | orchestrator | skipping: 
[testbed-node-3] 2026-01-30 07:01:18.665098 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:01:18.665106 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:01:18.665114 | orchestrator | 2026-01-30 07:01:18.665129 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-30 07:02:13.223460 | orchestrator | Friday 30 January 2026 07:01:18 +0000 (0:00:02.027) 1:13:12.256 ******** 2026-01-30 07:02:13.223606 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:02:13.223626 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:02:13.223639 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:02:13.223650 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:02:13.223662 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:02:13.223674 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:02:13.223685 | orchestrator | 2026-01-30 07:02:13.223697 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-30 07:02:13.223708 | orchestrator | Friday 30 January 2026 07:01:20 +0000 (0:00:01.984) 1:13:14.241 ******** 2026-01-30 07:02:13.223745 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:02:13.223757 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:02:13.223768 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:02:13.223778 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:02:13.223789 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:02:13.223800 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:02:13.223812 | orchestrator | 2026-01-30 07:02:13.223832 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-30 07:02:13.223851 | orchestrator | Friday 30 January 2026 07:01:22 +0000 (0:00:02.225) 1:13:16.466 ******** 2026-01-30 07:02:13.223869 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:02:13.223888 | orchestrator | skipping: [testbed-node-1] 2026-01-30 
07:02:13.223906 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:02:13.223925 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:02:13.223945 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:02:13.223963 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:02:13.223983 | orchestrator | 2026-01-30 07:02:13.223997 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-30 07:02:13.224011 | orchestrator | Friday 30 January 2026 07:01:24 +0000 (0:00:01.879) 1:13:18.346 ******** 2026-01-30 07:02:13.224024 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:02:13.224037 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:02:13.224050 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:02:13.224063 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:02:13.224076 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:02:13.224089 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:02:13.224101 | orchestrator | 2026-01-30 07:02:13.224114 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-30 07:02:13.224127 | orchestrator | Friday 30 January 2026 07:01:26 +0000 (0:00:01.905) 1:13:20.251 ******** 2026-01-30 07:02:13.224140 | orchestrator | skipping: [testbed-node-0] 2026-01-30 07:02:13.224153 | orchestrator | skipping: [testbed-node-1] 2026-01-30 07:02:13.224165 | orchestrator | skipping: [testbed-node-2] 2026-01-30 07:02:13.224178 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:02:13.224190 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:02:13.224209 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:02:13.224228 | orchestrator | 2026-01-30 07:02:13.224246 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-30 07:02:13.224266 | orchestrator | Friday 30 January 2026 07:01:28 +0000 (0:00:01.994) 1:13:22.246 ******** 2026-01-30 07:02:13.224310 | 
orchestrator | ok: [testbed-node-0] 2026-01-30 07:02:13.224333 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:02:13.224351 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:02:13.224370 | orchestrator | skipping: [testbed-node-3] 2026-01-30 07:02:13.224381 | orchestrator | skipping: [testbed-node-4] 2026-01-30 07:02:13.224392 | orchestrator | skipping: [testbed-node-5] 2026-01-30 07:02:13.224403 | orchestrator | 2026-01-30 07:02:13.224414 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-30 07:02:13.224425 | orchestrator | Friday 30 January 2026 07:01:30 +0000 (0:00:02.051) 1:13:24.297 ******** 2026-01-30 07:02:13.224435 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:02:13.224446 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:02:13.224457 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:02:13.224468 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:02:13.224479 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:02:13.224489 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:02:13.224501 | orchestrator | 2026-01-30 07:02:13.224512 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-30 07:02:13.224523 | orchestrator | Friday 30 January 2026 07:01:32 +0000 (0:00:01.826) 1:13:26.124 ******** 2026-01-30 07:02:13.224534 | orchestrator | ok: [testbed-node-0] 2026-01-30 07:02:13.224544 | orchestrator | ok: [testbed-node-1] 2026-01-30 07:02:13.224555 | orchestrator | ok: [testbed-node-2] 2026-01-30 07:02:13.224571 | orchestrator | ok: [testbed-node-3] 2026-01-30 07:02:13.224603 | orchestrator | ok: [testbed-node-4] 2026-01-30 07:02:13.224623 | orchestrator | ok: [testbed-node-5] 2026-01-30 07:02:13.224641 | orchestrator | 2026-01-30 07:02:13.224660 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-30 07:02:13.224679 | orchestrator | Friday 30 January 2026 07:01:34 +0000 (0:00:01.930) 
1:13:28.055 ********
2026-01-30 07:02:13.224697 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.224713 | orchestrator |
2026-01-30 07:02:13.224725 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-01-30 07:02:13.224736 | orchestrator | Friday 30 January 2026 07:01:37 +0000 (0:00:03.179) 1:13:31.234 ********
2026-01-30 07:02:13.224746 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.224757 | orchestrator |
2026-01-30 07:02:13.224768 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-01-30 07:02:13.224779 | orchestrator | Friday 30 January 2026 07:01:40 +0000 (0:00:03.238) 1:13:34.473 ********
2026-01-30 07:02:13.224789 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.224800 | orchestrator | ok: [testbed-node-1]
2026-01-30 07:02:13.224811 | orchestrator | ok: [testbed-node-3]
2026-01-30 07:02:13.224821 | orchestrator | ok: [testbed-node-2]
2026-01-30 07:02:13.224832 | orchestrator | ok: [testbed-node-4]
2026-01-30 07:02:13.224842 | orchestrator | ok: [testbed-node-5]
2026-01-30 07:02:13.224853 | orchestrator |
2026-01-30 07:02:13.224864 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-01-30 07:02:13.224875 | orchestrator | Friday 30 January 2026 07:01:43 +0000 (0:00:02.492) 1:13:36.966 ********
2026-01-30 07:02:13.224885 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.224896 | orchestrator | ok: [testbed-node-1]
2026-01-30 07:02:13.224907 | orchestrator | ok: [testbed-node-2]
2026-01-30 07:02:13.224917 | orchestrator | ok: [testbed-node-3]
2026-01-30 07:02:13.224929 | orchestrator | ok: [testbed-node-4]
2026-01-30 07:02:13.224948 | orchestrator | ok: [testbed-node-5]
2026-01-30 07:02:13.224966 | orchestrator |
2026-01-30 07:02:13.224984 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-01-30 07:02:13.225028 | orchestrator | Friday 30 January 2026 07:01:45 +0000 (0:00:02.300) 1:13:39.266 ********
2026-01-30 07:02:13.225051 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-30 07:02:13.225071 | orchestrator |
2026-01-30 07:02:13.225088 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-01-30 07:02:13.225099 | orchestrator | Friday 30 January 2026 07:01:47 +0000 (0:00:02.158) 1:13:41.424 ********
2026-01-30 07:02:13.225110 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.225121 | orchestrator | ok: [testbed-node-1]
2026-01-30 07:02:13.225132 | orchestrator | ok: [testbed-node-2]
2026-01-30 07:02:13.225143 | orchestrator | ok: [testbed-node-3]
2026-01-30 07:02:13.225153 | orchestrator | ok: [testbed-node-4]
2026-01-30 07:02:13.225164 | orchestrator | ok: [testbed-node-5]
2026-01-30 07:02:13.225174 | orchestrator |
2026-01-30 07:02:13.225185 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-01-30 07:02:13.225197 | orchestrator | Friday 30 January 2026 07:01:50 +0000 (0:00:02.527) 1:13:43.951 ********
2026-01-30 07:02:13.225208 | orchestrator | changed: [testbed-node-1]
2026-01-30 07:02:13.225219 | orchestrator | changed: [testbed-node-3]
2026-01-30 07:02:13.225230 | orchestrator | changed: [testbed-node-0]
2026-01-30 07:02:13.225241 | orchestrator | changed: [testbed-node-4]
2026-01-30 07:02:13.225251 | orchestrator | changed: [testbed-node-2]
2026-01-30 07:02:13.225262 | orchestrator | changed: [testbed-node-5]
2026-01-30 07:02:13.225273 | orchestrator |
2026-01-30 07:02:13.225311 | orchestrator | PLAY [Complete upgrade] ********************************************************
2026-01-30 07:02:13.225332 | orchestrator |
2026-01-30 07:02:13.225351 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 07:02:13.225369 | orchestrator | Friday 30 January 2026 07:01:55 +0000 (0:00:04.739) 1:13:48.691 ********
2026-01-30 07:02:13.225388 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.225420 | orchestrator | ok: [testbed-node-2]
2026-01-30 07:02:13.225438 | orchestrator | ok: [testbed-node-1]
2026-01-30 07:02:13.225450 | orchestrator |
2026-01-30 07:02:13.225461 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 07:02:13.225472 | orchestrator | Friday 30 January 2026 07:01:56 +0000 (0:00:01.754) 1:13:50.445 ********
2026-01-30 07:02:13.225485 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.225504 | orchestrator | ok: [testbed-node-1]
2026-01-30 07:02:13.225521 | orchestrator | ok: [testbed-node-2]
2026-01-30 07:02:13.225538 | orchestrator |
2026-01-30 07:02:13.225555 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-01-30 07:02:13.225574 | orchestrator | Friday 30 January 2026 07:01:58 +0000 (0:00:02.391) 1:13:51.842 ********
2026-01-30 07:02:13.225593 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:02:13.225612 | orchestrator |
2026-01-30 07:02:13.225630 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] ***
2026-01-30 07:02:13.225664 | orchestrator | Friday 30 January 2026 07:02:00 +0000 (0:00:02.391) 1:13:54.234 ********
2026-01-30 07:02:13.225695 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:02:13.225715 | orchestrator |
2026-01-30 07:02:13.225734 | orchestrator | PLAY [Upgrade node-exporter] ***************************************************
2026-01-30 07:02:13.225752 | orchestrator |
2026-01-30 07:02:13.225771 | orchestrator | TASK [Stop node-exporter] ******************************************************
2026-01-30 07:02:13.225789 | orchestrator | Friday 30 January 2026 07:02:02 +0000 (0:00:02.213) 1:13:56.448 ********
2026-01-30 07:02:13.225809 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:02:13.225827 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:02:13.225841 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:02:13.225852 | orchestrator | skipping: [testbed-node-3]
2026-01-30 07:02:13.225863 | orchestrator | skipping: [testbed-node-4]
2026-01-30 07:02:13.225873 | orchestrator | skipping: [testbed-node-5]
2026-01-30 07:02:13.225884 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:02:13.225895 | orchestrator |
2026-01-30 07:02:13.225905 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 07:02:13.225916 | orchestrator | Friday 30 January 2026 07:02:04 +0000 (0:00:01.956) 1:13:58.404 ********
2026-01-30 07:02:13.225927 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:02:13.225938 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:02:13.225948 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:02:13.225959 | orchestrator | skipping: [testbed-node-3]
2026-01-30 07:02:13.225969 | orchestrator | skipping: [testbed-node-4]
2026-01-30 07:02:13.225980 | orchestrator | skipping: [testbed-node-5]
2026-01-30 07:02:13.225991 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:02:13.226001 | orchestrator |
2026-01-30 07:02:13.226012 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-01-30 07:02:13.226117 | orchestrator | Friday 30 January 2026 07:02:07 +0000 (0:00:02.414) 1:14:00.819 ********
2026-01-30 07:02:13.226138 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:02:13.226158 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:02:13.226177 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:02:13.226194 | orchestrator | skipping: [testbed-node-3]
2026-01-30 07:02:13.226210 | orchestrator | skipping: [testbed-node-4]
2026-01-30 07:02:13.226221 | orchestrator | skipping: [testbed-node-5]
2026-01-30 07:02:13.226232 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:02:13.226243 | orchestrator |
2026-01-30 07:02:13.226254 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-01-30 07:02:13.226272 | orchestrator | Friday 30 January 2026 07:02:09 +0000 (0:00:02.716) 1:14:03.536 ********
2026-01-30 07:02:13.226402 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:02:13.226424 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:02:13.226443 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:02:13.226460 | orchestrator | skipping: [testbed-node-3]
2026-01-30 07:02:13.226478 | orchestrator | skipping: [testbed-node-4]
2026-01-30 07:02:13.226548 | orchestrator | skipping: [testbed-node-5]
2026-01-30 07:02:13.226568 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:02:13.226584 | orchestrator |
2026-01-30 07:02:13.226596 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************
2026-01-30 07:02:13.226607 | orchestrator | Friday 30 January 2026 07:02:12 +0000 (0:00:02.520) 1:14:06.057 ********
2026-01-30 07:02:13.226617 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:02:13.226628 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:02:13.226639 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:02:13.226664 | orchestrator | skipping: [testbed-node-3]
2026-01-30 07:03:01.913474 | orchestrator | skipping: [testbed-node-4]
2026-01-30 07:03:01.913596 | orchestrator | skipping: [testbed-node-5]
2026-01-30 07:03:01.913613 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.913625 | orchestrator |
2026-01-30 07:03:01.913638 | orchestrator | PLAY [Upgrade monitoring node] *************************************************
2026-01-30 07:03:01.913650 | orchestrator |
2026-01-30 07:03:01.913662 | orchestrator | TASK [Stop monitoring services] ************************************************
2026-01-30 07:03:01.913673 | orchestrator | Friday 30 January 2026 07:02:15 +0000 (0:00:03.339) 1:14:09.396 ********
2026-01-30 07:03:01.913685 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)
2026-01-30 07:03:01.913697 | orchestrator | skipping: [testbed-manager] => (item=prometheus)
2026-01-30 07:03:01.913708 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)
2026-01-30 07:03:01.913719 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.913730 | orchestrator |
2026-01-30 07:03:01.913741 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-01-30 07:03:01.913752 | orchestrator | Friday 30 January 2026 07:02:16 +0000 (0:00:01.106) 1:14:10.502 ********
2026-01-30 07:03:01.913762 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.913773 | orchestrator |
2026-01-30 07:03:01.913784 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-01-30 07:03:01.913795 | orchestrator | Friday 30 January 2026 07:02:17 +0000 (0:00:01.094) 1:14:11.597 ********
2026-01-30 07:03:01.913805 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.913816 | orchestrator |
2026-01-30 07:03:01.913827 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-01-30 07:03:01.913838 | orchestrator | Friday 30 January 2026 07:02:19 +0000 (0:00:01.127) 1:14:12.724 ********
2026-01-30 07:03:01.913848 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.913859 | orchestrator |
2026-01-30 07:03:01.913870 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-01-30 07:03:01.913881 | orchestrator | Friday 30 January 2026 07:02:20 +0000 (0:00:01.115) 1:14:13.840 ********
2026-01-30 07:03:01.913894 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.913912 | orchestrator |
2026-01-30 07:03:01.913940 | orchestrator | TASK [ceph-prometheus : Create prometheus directories] *************************
2026-01-30 07:03:01.913960 | orchestrator | Friday 30 January 2026 07:02:21 +0000 (0:00:01.190) 1:14:15.030 ********
2026-01-30 07:03:01.913978 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)
2026-01-30 07:03:01.913996 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)
2026-01-30 07:03:01.914014 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914243 | orchestrator |
2026-01-30 07:03:01.914295 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] **************************
2026-01-30 07:03:01.914313 | orchestrator | Friday 30 January 2026 07:02:22 +0000 (0:00:01.108) 1:14:16.139 ********
2026-01-30 07:03:01.914332 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914351 | orchestrator |
2026-01-30 07:03:01.914370 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] *********
2026-01-30 07:03:01.914387 | orchestrator | Friday 30 January 2026 07:02:23 +0000 (0:00:01.117) 1:14:17.256 ********
2026-01-30 07:03:01.914398 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914409 | orchestrator |
2026-01-30 07:03:01.914420 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] ***********************************
2026-01-30 07:03:01.914459 | orchestrator | Friday 30 January 2026 07:02:24 +0000 (0:00:01.129) 1:14:18.385 ********
2026-01-30 07:03:01.914470 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914481 | orchestrator |
2026-01-30 07:03:01.914492 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] ***********************
2026-01-30 07:03:01.914503 | orchestrator | Friday 30 January 2026 07:02:25 +0000 (0:00:01.150) 1:14:19.536 ********
2026-01-30 07:03:01.914513 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)
2026-01-30 07:03:01.914524 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/alertmanager)
2026-01-30 07:03:01.914535 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914546 | orchestrator |
2026-01-30 07:03:01.914556 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************
2026-01-30 07:03:01.914567 | orchestrator | Friday 30 January 2026 07:02:27 +0000 (0:00:01.145) 1:14:20.682 ********
2026-01-30 07:03:01.914578 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914588 | orchestrator |
2026-01-30 07:03:01.914599 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] ***************************
2026-01-30 07:03:01.914610 | orchestrator | Friday 30 January 2026 07:02:28 +0000 (0:00:01.132) 1:14:21.814 ********
2026-01-30 07:03:01.914621 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914631 | orchestrator |
2026-01-30 07:03:01.914642 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ******************************
2026-01-30 07:03:01.914652 | orchestrator | Friday 30 January 2026 07:02:29 +0000 (0:00:01.124) 1:14:22.938 ********
2026-01-30 07:03:01.914663 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914673 | orchestrator |
2026-01-30 07:03:01.914684 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] ****************************
2026-01-30 07:03:01.914695 | orchestrator | Friday 30 January 2026 07:02:30 +0000 (0:00:01.150) 1:14:24.089 ********
2026-01-30 07:03:01.914705 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:01.914716 | orchestrator |
2026-01-30 07:03:01.914727 | orchestrator | PLAY [Upgrade ceph dashboard] **************************************************
2026-01-30 07:03:01.914737 | orchestrator |
2026-01-30 07:03:01.914748 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-01-30 07:03:01.914759 | orchestrator | Friday 30 January 2026 07:02:32 +0000 (0:00:02.226) 1:14:26.316 ********
2026-01-30 07:03:01.914769 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:03:01.914780 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:03:01.914791 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:03:01.914801 | orchestrator |
2026-01-30 07:03:01.914815 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************
2026-01-30 07:03:01.914834 | orchestrator | Friday 30 January 2026 07:02:34 +0000 (0:00:01.371) 1:14:27.688 ********
2026-01-30 07:03:01.914852 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:03:01.914870 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:03:01.914914 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:03:01.914935 | orchestrator |
2026-01-30 07:03:01.914954 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************
2026-01-30 07:03:01.914966 | orchestrator | Friday 30 January 2026 07:02:35 +0000 (0:00:01.400) 1:14:29.088 ********
2026-01-30 07:03:01.914976 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:03:01.914987 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:03:01.914998 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:03:01.915009 | orchestrator |
2026-01-30 07:03:01.915020 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] ***********************
2026-01-30 07:03:01.915031 | orchestrator | Friday 30 January 2026 07:02:36 +0000 (0:00:01.344) 1:14:30.433 ********
2026-01-30 07:03:01.915042 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:03:01.915053 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:03:01.915063 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:03:01.915074 | orchestrator |
2026-01-30 07:03:01.915085 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] ***********************
2026-01-30 07:03:01.915105 | orchestrator | Friday 30 January 2026 07:02:38 +0000 (0:00:01.380) 1:14:31.814 ********
2026-01-30 07:03:01.915116 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:03:01.915127 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:03:01.915137 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:03:01.915148 | orchestrator |
2026-01-30 07:03:01.915159 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************
2026-01-30 07:03:01.915169 | orchestrator | Friday 30 January 2026 07:02:39 +0000 (0:00:01.331) 1:14:33.145 ********
2026-01-30 07:03:01.915180 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:03:01.915191 | orchestrator | skipping: [testbed-node-1]
2026-01-30 07:03:01.915201 | orchestrator | skipping: [testbed-node-2]
2026-01-30 07:03:01.915212 | orchestrator |
2026-01-30 07:03:01.915223 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************
2026-01-30 07:03:01.915233 | orchestrator | Friday 30 January 2026 07:02:41 +0000 (0:00:01.719) 1:14:34.865 ********
2026-01-30 07:03:01.915244 | orchestrator | skipping: [testbed-node-0]
2026-01-30 07:03:01.915290 | orchestrator |
2026-01-30 07:03:01.915311 | orchestrator | PLAY [Switch any existing crush buckets to straw2] *****************************
2026-01-30 07:03:01.915325 | orchestrator |
2026-01-30 07:03:01.915336 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-01-30 07:03:01.915347 | orchestrator | Friday 30 January 2026 07:02:42 +0000 (0:00:01.578) 1:14:36.444 ********
2026-01-30 07:03:01.915358 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915369 | orchestrator |
2026-01-30 07:03:01.915380 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-01-30 07:03:01.915391 | orchestrator | Friday 30 January 2026 07:02:44 +0000 (0:00:01.470) 1:14:37.915 ********
2026-01-30 07:03:01.915402 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915413 | orchestrator |
2026-01-30 07:03:01.915424 | orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-01-30 07:03:01.915434 | orchestrator | Friday 30 January 2026 07:02:45 +0000 (0:00:01.129) 1:14:39.044 ********
2026-01-30 07:03:01.915445 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915456 | orchestrator |
2026-01-30 07:03:01.915467 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-01-30 07:03:01.915478 | orchestrator | Friday 30 January 2026 07:02:46 +0000 (0:00:01.123) 1:14:40.168 ********
2026-01-30 07:03:01.915489 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915500 | orchestrator |
2026-01-30 07:03:01.915511 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-01-30 07:03:01.915521 | orchestrator | Friday 30 January 2026 07:02:49 +0000 (0:00:03.028) 1:14:43.196 ********
2026-01-30 07:03:01.915532 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915543 | orchestrator |
2026-01-30 07:03:01.915554 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-01-30 07:03:01.915565 | orchestrator | Friday 30 January 2026 07:02:52 +0000 (0:00:03.127) 1:14:46.323 ********
2026-01-30 07:03:01.915576 | orchestrator | changed: [testbed-node-0]
2026-01-30 07:03:01.915587 | orchestrator |
2026-01-30 07:03:01.915598 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-01-30 07:03:01.915608 | orchestrator |
2026-01-30 07:03:01.915619 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-01-30 07:03:01.915630 | orchestrator | Friday 30 January 2026 07:02:54 +0000 (0:00:01.799) 1:14:48.123 ********
2026-01-30 07:03:01.915641 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915652 | orchestrator | ok: [testbed-node-1]
2026-01-30 07:03:01.915663 | orchestrator | ok: [testbed-node-2]
2026-01-30 07:03:01.915673 | orchestrator |
2026-01-30 07:03:01.915684 | orchestrator | TASK [Show ceph status] ********************************************************
2026-01-30 07:03:01.915695 | orchestrator | Friday 30 January 2026 07:02:55 +0000 (0:00:01.440) 1:14:49.563 ********
2026-01-30 07:03:01.915706 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915717 | orchestrator |
2026-01-30 07:03:01.915728 | orchestrator | TASK [Show all daemons version] ************************************************
2026-01-30 07:03:01.915747 | orchestrator | Friday 30 January 2026 07:02:58 +0000 (0:00:02.391) 1:14:51.954 ********
2026-01-30 07:03:01.915758 | orchestrator | ok: [testbed-node-0]
2026-01-30 07:03:01.915769 | orchestrator |
2026-01-30 07:03:01.915780 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 07:03:01.915793 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-30 07:03:01.915814 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-01-30 07:03:01.915834 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0
2026-01-30 07:03:01.915852 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0
2026-01-30 07:03:01.915883 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0
2026-01-30 07:03:02.715085 | orchestrator | testbed-node-3 : ok=317  changed=20  unreachable=0 failed=0 skipped=355  rescued=0 ignored=0
2026-01-30 07:03:02.715178 | orchestrator | testbed-node-4 : ok=302  changed=17  unreachable=0 failed=0 skipped=338  rescued=0 ignored=0
2026-01-30 07:03:02.715190 | orchestrator | testbed-node-5 : ok=308  changed=17  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-01-30 07:03:02.715199 | orchestrator |
2026-01-30 07:03:02.715207 | orchestrator |
2026-01-30 07:03:02.715215 | orchestrator |
2026-01-30 07:03:02.715224 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 07:03:02.715235 | orchestrator | Friday 30 January 2026 07:03:01 +0000 (0:00:03.536) 1:14:55.491 ********
2026-01-30 07:03:02.715244 | orchestrator | ===============================================================================
2026-01-30 07:03:02.715296 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 77.50s
2026-01-30 07:03:02.715305 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 76.22s
2026-01-30 07:03:02.715313 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.88s
2026-01-30 07:03:02.715321 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.60s
2026-01-30 07:03:02.715330 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 33.07s
2026-01-30 07:03:02.715338 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.42s
2026-01-30 07:03:02.715347 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 31.94s
2026-01-30 07:03:02.715355 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 31.02s
2026-01-30 07:03:02.715363 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 28.37s
2026-01-30 07:03:02.715371 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.15s
2026-01-30 07:03:02.715379 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.99s
2026-01-30 07:03:02.715388 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.96s
2026-01-30 07:03:02.715396 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.85s
2026-01-30 07:03:02.715403 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.06s
2026-01-30 07:03:02.715411 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.67s
2026-01-30 07:03:02.715419 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.82s
2026-01-30 07:03:02.715427 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.79s
2026-01-30 07:03:02.715460 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.98s
2026-01-30 07:03:02.715469 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.80s
2026-01-30 07:03:02.715476 | orchestrator | Stop ceph mon ---------------------------------------------------------- 11.42s
2026-01-30 07:03:03.051009 | orchestrator | + osism apply cephclient
2026-01-30 07:03:05.158895 | orchestrator | 2026-01-30 07:03:05 | INFO  | Task d8df0337-9e66-488a-98a4-b0386ffe3f5f (cephclient) was prepared for execution.
2026-01-30 07:03:05.159012 | orchestrator | 2026-01-30 07:03:05 | INFO  | It takes a moment until task d8df0337-9e66-488a-98a4-b0386ffe3f5f (cephclient) has been started and output is visible here.
2026-01-30 07:03:32.925835 | orchestrator |
2026-01-30 07:03:32.925922 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-01-30 07:03:32.925931 | orchestrator |
2026-01-30 07:03:32.925937 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-30 07:03:32.925942 | orchestrator | Friday 30 January 2026 07:03:11 +0000 (0:00:02.194) 0:00:02.194 ********
2026-01-30 07:03:32.925947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-30 07:03:32.925954 | orchestrator |
2026-01-30 07:03:32.925959 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-30 07:03:32.925964 | orchestrator | Friday 30 January 2026 07:03:13 +0000 (0:00:01.805) 0:00:04.000 ********
2026-01-30 07:03:32.925970 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-30 07:03:32.925975 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-30 07:03:32.925981 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-30 07:03:32.925986 | orchestrator |
2026-01-30 07:03:32.925991 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-30 07:03:32.925996 | orchestrator | Friday 30 January 2026 07:03:16 +0000 (0:00:02.506) 0:00:06.506 ********
2026-01-30 07:03:32.926001 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-30 07:03:32.926006 | orchestrator |
2026-01-30 07:03:32.926011 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-30 07:03:32.926058 | orchestrator | Friday 30 January 2026 07:03:18 +0000 (0:00:02.039) 0:00:08.546 ********
2026-01-30 07:03:32.926063 | orchestrator | ok: [testbed-manager]
2026-01-30 07:03:32.926068 | orchestrator |
2026-01-30 07:03:32.926073 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-30 07:03:32.926078 | orchestrator | Friday 30 January 2026 07:03:20 +0000 (0:00:01.868) 0:00:10.414 ********
2026-01-30 07:03:32.926083 | orchestrator | ok: [testbed-manager]
2026-01-30 07:03:32.926088 | orchestrator |
2026-01-30 07:03:32.926093 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-30 07:03:32.926098 | orchestrator | Friday 30 January 2026 07:03:21 +0000 (0:00:01.761) 0:00:12.176 ********
2026-01-30 07:03:32.926103 | orchestrator | ok: [testbed-manager]
2026-01-30 07:03:32.926108 | orchestrator |
2026-01-30 07:03:32.926113 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-30 07:03:32.926118 | orchestrator | Friday 30 January 2026 07:03:23 +0000 (0:00:02.031) 0:00:14.207 ********
2026-01-30 07:03:32.926123 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-30 07:03:32.926129 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-01-30 07:03:32.926134 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-30 07:03:32.926139 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-30 07:03:32.926143 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-30 07:03:32.926148 | orchestrator |
2026-01-30 07:03:32.926153 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-30 07:03:32.926158 | orchestrator | Friday 30 January 2026 07:03:28 +0000 (0:00:04.741) 0:00:18.948 ********
2026-01-30 07:03:32.926180 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-30 07:03:32.926185 | orchestrator |
2026-01-30 07:03:32.926190 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-30 07:03:32.926195 | orchestrator | Friday 30 January 2026 07:03:30 +0000 (0:00:01.473) 0:00:20.422 ********
2026-01-30 07:03:32.926200 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:32.926204 | orchestrator |
2026-01-30 07:03:32.926209 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-30 07:03:32.926214 | orchestrator | Friday 30 January 2026 07:03:31 +0000 (0:00:01.121) 0:00:21.544 ********
2026-01-30 07:03:32.926219 | orchestrator | skipping: [testbed-manager]
2026-01-30 07:03:32.926261 | orchestrator |
2026-01-30 07:03:32.926267 | orchestrator | PLAY RECAP *********************************************************************
2026-01-30 07:03:32.926271 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-30 07:03:32.926277 | orchestrator |
2026-01-30 07:03:32.926282 | orchestrator |
2026-01-30 07:03:32.926287 | orchestrator | TASKS RECAP ********************************************************************
2026-01-30 07:03:32.926292 | orchestrator | Friday 30 January 2026 07:03:32 +0000 (0:00:01.442) 0:00:22.986 ********
2026-01-30 07:03:32.926297 | orchestrator | ===============================================================================
2026-01-30 07:03:32.926301 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.74s
2026-01-30 07:03:32.926306 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.51s
2026-01-30 07:03:32.926311 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.04s
2026-01-30 07:03:32.926316 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.03s
2026-01-30 07:03:32.926320 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.87s
2026-01-30 07:03:32.926325 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.81s
2026-01-30 07:03:32.926330 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.76s
2026-01-30 07:03:32.926335 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.47s
2026-01-30 07:03:32.926340 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.44s
2026-01-30 07:03:32.926344 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.12s
2026-01-30 07:03:33.293651 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-01-30 07:03:33.293751 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-01-30 07:03:33.301944 | orchestrator | + set -e
2026-01-30 07:03:33.302060 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-01-30 07:03:33.302072 | orchestrator | ++ export INTERACTIVE=false
2026-01-30 07:03:33.302079 | orchestrator | ++ INTERACTIVE=false
2026-01-30 07:03:33.302188 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-01-30 07:03:33.302198 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-01-30 07:03:33.302204 | orchestrator | + source /opt/manager-vars.sh
2026-01-30 07:03:33.302210 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-01-30 07:03:33.302215 | orchestrator | ++ NUMBER_OF_NODES=6
2026-01-30 07:03:33.302220 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-30 07:03:33.302317 | orchestrator | ++ CEPH_VERSION=reef
2026-01-30 07:03:33.302362 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-30 07:03:33.302369 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-30 07:03:33.302375 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-30 07:03:33.302384 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-30 07:03:33.302456 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-30 07:03:33.302466 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-30 07:03:33.302472 | orchestrator | ++ export ARA=false
2026-01-30 07:03:33.302513 | orchestrator | ++ ARA=false
2026-01-30 07:03:33.302523 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-30 07:03:33.303458 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-30 07:03:33.303494 | orchestrator | ++ export TEMPEST=false
2026-01-30 07:03:33.303502 | orchestrator | ++ TEMPEST=false
2026-01-30 07:03:33.303508 | orchestrator | ++ export IS_ZUUL=true
2026-01-30 07:03:33.303515 | orchestrator | ++ IS_ZUUL=true
2026-01-30 07:03:33.303522 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 07:03:33.303552 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.182
2026-01-30 07:03:33.303577 | orchestrator | ++ export EXTERNAL_API=false
2026-01-30 07:03:33.303594 | orchestrator | ++ EXTERNAL_API=false
2026-01-30 07:03:33.303603 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-30 07:03:33.303612 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-30 07:03:33.303621 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-30 07:03:33.303631 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-30 07:03:33.303638 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-30 07:03:33.303647 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-30 07:03:33.303657 | orchestrator | ++ export RABBITMQ3TO4=true
2026-01-30 07:03:33.303665 | orchestrator | ++ RABBITMQ3TO4=true
2026-01-30 07:03:33.303674 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-01-30 07:03:33.304212 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-01-30 07:03:33.310272 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-01-30 07:03:33.310330 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-01-30 07:03:33.310343 | orchestrator | + [[ true == \t\r\u\e ]]
2026-01-30 07:03:33.310353 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-01-30 07:03:53.047441 | orchestrator | 2026-01-30 07:03:53 | ERROR  | Unable to get ansible vault password
2026-01-30 07:03:53.047554 | orchestrator | 2026-01-30 07:03:53 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-01-30 07:03:53.047571 | orchestrator | 2026-01-30 07:03:53 | ERROR  | Dropping encrypted entries
2026-01-30 07:03:53.091871 | orchestrator | 2026-01-30 07:03:53 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-01-30 07:03:53.092813 | orchestrator | 2026-01-30 07:03:53 | INFO  | Kolla configuration check passed
2026-01-30 07:03:53.285163 | orchestrator | 2026-01-30 07:03:53 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-01-30 07:03:53.310257 | orchestrator | 2026-01-30 07:03:53 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-01-30 07:03:53.591714 | orchestrator | + osism migrate rabbitmq3to4 list
2026-01-30 07:04:13.948747 | orchestrator | 2026-01-30 07:04:13 | ERROR  | Unable to get ansible vault password
2026-01-30 07:04:13.948888 | orchestrator | 2026-01-30 07:04:13 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-01-30 07:04:13.948906 | orchestrator | 2026-01-30 07:04:13 | ERROR  | Dropping encrypted entries
2026-01-30 07:04:13.986786 | orchestrator | 2026-01-30 07:04:13 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-01-30 07:04:14.139001 | orchestrator | 2026-01-30 07:04:14 | INFO  | Found 205 classic queue(s) in vhost '/': 2026-01-30 07:04:14.139106 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-01-30 07:04:14.139131 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-01-30 07:04:14.139146 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-01-30 07:04:14.139159 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-01-30 07:04:14.139171 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - barbican.workers_fanout_3346cb8eeb0a497a9f216c43b5ef26d7 (vhost: /, messages: 0) 2026-01-30 07:04:14.139184 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - barbican.workers_fanout_a34fb7ce13f24ecc8c2b28c528ee9c9f (vhost: /, messages: 0) 2026-01-30 07:04:14.139231 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - barbican.workers_fanout_a6e447ac535d493695ba67cc759b41ed (vhost: /, messages: 0) 2026-01-30 07:04:14.139290 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-01-30 07:04:14.139303 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central (vhost: /, messages: 0) 2026-01-30 07:04:14.139418 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.139442 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.139453 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.139465 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central_fanout_6e84d9b81a674d93b5280c7cfc747acb (vhost: /, messages: 0) 2026-01-30 07:04:14.140001 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central_fanout_a15de3ccf43e4450a7b7edec8ed655d9 (vhost: /, messages: 0) 2026-01-30 
07:04:14.140043 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central_fanout_c34862166dde4bda9f88517b1b6ff273 (vhost: /, messages: 0) 2026-01-30 07:04:14.140062 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central_fanout_d6c9353b0f24496cb7bc77defc05309f (vhost: /, messages: 0) 2026-01-30 07:04:14.140080 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central_fanout_ea51df67e6a84f55a18f3420281f6cc8 (vhost: /, messages: 0) 2026-01-30 07:04:14.140272 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - central_fanout_f64d58b4a7be4dde872e48dc881391bb (vhost: /, messages: 0) 2026-01-30 07:04:14.140289 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-01-30 07:04:14.140317 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.140329 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.140433 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.140452 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-backup_fanout_72c3921aa13640b0b0a9f7fd5a89a006 (vhost: /, messages: 0) 2026-01-30 07:04:14.140463 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-backup_fanout_a0c211ed3ca14b55867276a71517dbe0 (vhost: /, messages: 0) 2026-01-30 07:04:14.140474 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-backup_fanout_b3146ba15410449b97bc49f1a70cc73a (vhost: /, messages: 0) 2026-01-30 07:04:14.140639 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-01-30 07:04:14.140671 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.140690 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.140957 | orchestrator | 2026-01-30 
07:04:14 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.140981 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-scheduler_fanout_0734da7d2ca74c4f8c1bb406cb7a8853 (vhost: /, messages: 0) 2026-01-30 07:04:14.140992 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-scheduler_fanout_307a9397ccf744148d3288dc3522d2fc (vhost: /, messages: 0) 2026-01-30 07:04:14.141094 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-scheduler_fanout_8a6040feb47345b7a067d2f77f5a4cc4 (vhost: /, messages: 0) 2026-01-30 07:04:14.141112 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-01-30 07:04:14.141900 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-01-30 07:04:14.142256 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.142364 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_e9729a0e1a9d46e692568f174025f0a3 (vhost: /, messages: 0) 2026-01-30 07:04:14.142381 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-01-30 07:04:14.142391 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.142401 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_30a80bf10c0748b0b319b9214de0c418 (vhost: /, messages: 0) 2026-01-30 07:04:14.142423 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-01-30 07:04:14.142433 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.142443 | orchestrator | 2026-01-30 07:04:14 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_340ac17c80d54fc6b509eb4f9d8f188e (vhost: /, messages: 0) 2026-01-30 07:04:14.142526 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume_fanout_1a6fa41c69874033a12d43ab562e16ed (vhost: /, messages: 0) 2026-01-30 07:04:14.142605 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume_fanout_72b6666e56914e7e91b163d198691338 (vhost: /, messages: 0) 2026-01-30 07:04:14.142616 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - cinder-volume_fanout_992ff18a71954c66b1d9b1e08f805a62 (vhost: /, messages: 0) 2026-01-30 07:04:14.142626 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - compute (vhost: /, messages: 0) 2026-01-30 07:04:14.142708 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-01-30 07:04:14.142724 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-01-30 07:04:14.142734 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-01-30 07:04:14.142748 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - compute_fanout_0cd9a71fec9448babf1adba613406de3 (vhost: /, messages: 0) 2026-01-30 07:04:14.142758 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - compute_fanout_0e4479dc309a4db59e83b352dc2bcdc1 (vhost: /, messages: 0) 2026-01-30 07:04:14.142866 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - compute_fanout_2753a03f80ea475e90156d557a47a9c8 (vhost: /, messages: 0) 2026-01-30 07:04:14.143004 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor (vhost: /, messages: 0) 2026-01-30 07:04:14.143022 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.143032 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.143155 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-01-30 07:04:14.143460 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor_fanout_33e416b4682d4a8d95187f99a8913134 (vhost: /, messages: 0) 2026-01-30 07:04:14.143485 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor_fanout_4d99a0c2bd0742e99618f2f4fb1661fb (vhost: /, messages: 0) 2026-01-30 07:04:14.143495 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor_fanout_51e96882b2394b6f8d040451c1e7e3f8 (vhost: /, messages: 0) 2026-01-30 07:04:14.143624 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor_fanout_69c62f2f65b44af98597e5e301d0b7cc (vhost: /, messages: 0) 2026-01-30 07:04:14.143732 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor_fanout_8b52956462da4f768a328af1c215b66e (vhost: /, messages: 0) 2026-01-30 07:04:14.143760 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - conductor_fanout_f2eda4c9b72145a18a30c5293d6882d6 (vhost: /, messages: 0) 2026-01-30 07:04:14.143870 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - event.sample (vhost: /, messages: 9) 2026-01-30 07:04:14.143883 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-01-30 07:04:14.144019 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor.3m2muukegfhj (vhost: /, messages: 0) 2026-01-30 07:04:14.144037 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor.ghdsdw4zec7h (vhost: /, messages: 0) 2026-01-30 07:04:14.144047 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor.oall33bbdnxd (vhost: /, messages: 0) 2026-01-30 07:04:14.144173 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_06a0e6d583c94a50b9f8950bf6ef5af6 (vhost: /, messages: 0) 2026-01-30 07:04:14.145147 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_43786040747f4f2d869d3dedc95fa3b0 (vhost: /, messages: 0) 2026-01-30 07:04:14.145217 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_55fff5fadf7c4f2bbae1583ea58dce42 (vhost: /, 
messages: 0) 2026-01-30 07:04:14.145226 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_90b1b775f06f49428c10c584fe7f925c (vhost: /, messages: 0) 2026-01-30 07:04:14.145232 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_a209556f9f5142029c2829ffaa4b42d7 (vhost: /, messages: 0) 2026-01-30 07:04:14.145238 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_a4b9f539e1c445758401eaaa3214a946 (vhost: /, messages: 0) 2026-01-30 07:04:14.145292 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_bd34b71b56a845cdb6b6d5a80194e3c3 (vhost: /, messages: 0) 2026-01-30 07:04:14.145305 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_fa1de9ebf3a24d65b961053c84a7e2af (vhost: /, messages: 0) 2026-01-30 07:04:14.145314 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - magnum-conductor_fanout_fba2c7d345744596a23eb1e1416d8d67 (vhost: /, messages: 0) 2026-01-30 07:04:14.145324 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-01-30 07:04:14.145334 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.145351 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.145359 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.145364 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-data_fanout_022ab5221f9049e4a698725695e8ce5e (vhost: /, messages: 0) 2026-01-30 07:04:14.145625 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-data_fanout_8d70f5d3f2d04bee90f49ccfdf23793f (vhost: /, messages: 0) 2026-01-30 07:04:14.145636 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-data_fanout_9730f026c9d54145a22d4462e5481f05 (vhost: /, messages: 0) 2026-01-30 07:04:14.145642 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-01-30 07:04:14.145647 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.145729 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.145758 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.145845 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-scheduler_fanout_2f4cf494688d49bdbd26bda726bbdf2f (vhost: /, messages: 0) 2026-01-30 07:04:14.145857 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-scheduler_fanout_6f0a00a0ccdd4ae8a59eb929365fd304 (vhost: /, messages: 0) 2026-01-30 07:04:14.146251 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-scheduler_fanout_afa76fe667904b259bc5c21bd2e63c84 (vhost: /, messages: 0) 2026-01-30 07:04:14.146273 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-01-30 07:04:14.146278 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-01-30 07:04:14.146426 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-01-30 07:04:14.146442 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-01-30 07:04:14.146452 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-share_fanout_a50a52a30c6f461fa62be1d6e81bd027 (vhost: /, messages: 0) 2026-01-30 07:04:14.146702 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-share_fanout_c7360683b0a9466a908a9eddcee01ad0 (vhost: /, messages: 0) 2026-01-30 07:04:14.146722 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - manila-share_fanout_e4733fda8d1a4c01b04388d51c1e42ae (vhost: /, messages: 0) 2026-01-30 07:04:14.146731 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - 
notifications.audit (vhost: /, messages: 0) 2026-01-30 07:04:14.146815 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-01-30 07:04:14.146821 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-01-30 07:04:14.146827 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-01-30 07:04:14.146832 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-01-30 07:04:14.147086 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-01-30 07:04:14.147102 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-01-30 07:04:14.147232 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-01-30 07:04:14.147243 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.147256 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.147508 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.147530 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - octavia_provisioning_v2_fanout_64aa1701e83b451da8183e09e666bb72 (vhost: /, messages: 0) 2026-01-30 07:04:14.147592 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - octavia_provisioning_v2_fanout_97d9da3705ff43de92f2c7110d1a5f23 (vhost: /, messages: 0) 2026-01-30 07:04:14.147602 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - octavia_provisioning_v2_fanout_b7471d29dc2c42cda0005a7f76465fd4 (vhost: /, messages: 0) 2026-01-30 07:04:14.147610 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer (vhost: /, messages: 0) 2026-01-30 07:04:14.147618 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - 
producer.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.147836 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.147854 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.147863 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer_fanout_74062ce3e80d4e81bd1f12015168b22c (vhost: /, messages: 0) 2026-01-30 07:04:14.148163 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer_fanout_88dd99b0748e4731b1b918b0a83660bc (vhost: /, messages: 0) 2026-01-30 07:04:14.148187 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer_fanout_9dd8204747af48e78e5e423ab913c8dc (vhost: /, messages: 0) 2026-01-30 07:04:14.148373 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer_fanout_a4b692386c1e430d9946e913200894f4 (vhost: /, messages: 0) 2026-01-30 07:04:14.148389 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer_fanout_a558d2c4b1c943d8bedc972ca023914b (vhost: /, messages: 0) 2026-01-30 07:04:14.148397 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - producer_fanout_aefb410ee9a2481085cb9df9a6edc1f7 (vhost: /, messages: 0) 2026-01-30 07:04:14.148457 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-01-30 07:04:14.148481 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.148491 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.149175 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.149282 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_1649b82ab85145df8380242629446e81 (vhost: /, messages: 0) 2026-01-30 07:04:14.149295 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_2aed46f4b62a417b9787287eabe5c4a2 (vhost: /, messages: 0) 2026-01-30 
07:04:14.149306 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_408ad3ecb4c140a7adbde96f1586982d (vhost: /, messages: 0) 2026-01-30 07:04:14.149316 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_4a44dc14418947ddb2f7ceb1248de02e (vhost: /, messages: 0) 2026-01-30 07:04:14.149326 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_51d698c3bebc48a38e79ba3e1d832f6d (vhost: /, messages: 0) 2026-01-30 07:04:14.149335 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_6bf150633f9347f398582a0ccd123c5f (vhost: /, messages: 0) 2026-01-30 07:04:14.149418 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_8b27d1f8fb904abc9d50cf3a4af243f3 (vhost: /, messages: 0) 2026-01-30 07:04:14.149506 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_e4f83ae9d0344ac4a528ae3c419b613a (vhost: /, messages: 0) 2026-01-30 07:04:14.149524 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-plugin_fanout_f244fec4eaef44248d8ad6226102e836 (vhost: /, messages: 0) 2026-01-30 07:04:14.149550 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-01-30 07:04:14.149575 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.149592 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.149607 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.149622 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_0fabc9865b64443d95d3a47b3c09ca5c (vhost: /, messages: 0) 2026-01-30 07:04:14.149638 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_371a81befcf247e198d3d5fb8ca1324d (vhost: /, messages: 0) 2026-01-30 07:04:14.149790 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - 
q-reports-plugin_fanout_53f6318ae85e405aac2a4c29dea669cf (vhost: /, messages: 0) 2026-01-30 07:04:14.149807 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_61d214f8b3114dfd9f04f3ddaa05c16b (vhost: /, messages: 0) 2026-01-30 07:04:14.149817 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_6ba06468003f4acd96a0c35c051f412e (vhost: /, messages: 0) 2026-01-30 07:04:14.150007 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_7192b99bed9c414db9fb3b17403da7ea (vhost: /, messages: 0) 2026-01-30 07:04:14.150166 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_779e42daec1d4e8982390ed0de1b83eb (vhost: /, messages: 0) 2026-01-30 07:04:14.150182 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_7dc0edcd4b974e57aceb112254993549 (vhost: /, messages: 0) 2026-01-30 07:04:14.150245 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_9af3f46680ce4db5b85447ada0454b44 (vhost: /, messages: 0) 2026-01-30 07:04:14.150258 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_9d8b270facbe476d9082f4d8b7f37bd0 (vhost: /, messages: 0) 2026-01-30 07:04:14.150371 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_9fc7185dfe884075b98b844c6467854e (vhost: /, messages: 0) 2026-01-30 07:04:14.150387 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_b1c668082e6344f181fd0df3126d2a20 (vhost: /, messages: 0) 2026-01-30 07:04:14.150475 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_b297e437780e47a9ac17307b2ff15c98 (vhost: /, messages: 0) 2026-01-30 07:04:14.150486 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_b78c11a29ead454c876988b846ce64f5 (vhost: /, messages: 0) 2026-01-30 07:04:14.150496 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_bcc96391114f4f95a03fcb7592acb2dd (vhost: /, messages: 0) 2026-01-30 
07:04:14.150995 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_d9738dfb7ec94fbd8aab0a9af56dd960 (vhost: /, messages: 0) 2026-01-30 07:04:14.151038 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_e4479847276844229f583e1e0bbee6ae (vhost: /, messages: 0) 2026-01-30 07:04:14.151102 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-reports-plugin_fanout_fb00772eddf64aa8ae1ab95c596bc611 (vhost: /, messages: 0) 2026-01-30 07:04:14.151113 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-01-30 07:04:14.151121 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-01-30 07:04:14.151130 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-01-30 07:04:14.151141 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-01-30 07:04:14.151146 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_0b96621fe12c42e4bffe10409828499d (vhost: /, messages: 0) 2026-01-30 07:04:14.151152 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_0bfeff1d5eaf4f98bd6fb41c19ab7bad (vhost: /, messages: 0) 2026-01-30 07:04:14.151157 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_1ae8bd7916294367a9de6df5db949ba2 (vhost: /, messages: 0) 2026-01-30 07:04:14.151162 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_303bd971f3cb4b6baeacd2540fb1cb04 (vhost: /, messages: 0) 2026-01-30 07:04:14.151427 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_527197315b2a4f969d715b4cac2cd557 (vhost: /, messages: 0) 2026-01-30 07:04:14.151446 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - 
q-server-resource-versions_fanout_54f963d2868344c5994c3bb1bbd25557 (vhost: /, messages: 0) 2026-01-30 07:04:14.151451 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_7eb8ffa369ac473282869e301223af5b (vhost: /, messages: 0) 2026-01-30 07:04:14.151459 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_d2052f6236674861b99be795b537ec16 (vhost: /, messages: 0) 2026-01-30 07:04:14.151587 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - q-server-resource-versions_fanout_e13681125af7463fbe5e96b65586fdf3 (vhost: /, messages: 0) 2026-01-30 07:04:14.151597 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_077228b32f7b44c8afaebb096e5d2301 (vhost: /, messages: 0) 2026-01-30 07:04:14.151603 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_08acacfe55a1430a854a36eeb06024b4 (vhost: /, messages: 0) 2026-01-30 07:04:14.151756 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_0adec211b1fd41ae8fc4b9d8d163d9f0 (vhost: /, messages: 0) 2026-01-30 07:04:14.151765 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_0ca6a75010594962a9a804aa21bf3647 (vhost: /, messages: 0) 2026-01-30 07:04:14.151770 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_150e2304ee1040c2bda7679efa7385bc (vhost: /, messages: 0) 2026-01-30 07:04:14.151775 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_75196fcbdb7e41df98c4fc844de0deb5 (vhost: /, messages: 0) 2026-01-30 07:04:14.151848 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_895ec44072e347c4815c644d075350a0 (vhost: /, messages: 0) 2026-01-30 07:04:14.151857 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_8b688a34c4ad402784811f5361844b00 (vhost: /, messages: 0) 2026-01-30 07:04:14.152096 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_91c48d2d6da64f93a97d2bb48d1fb121 (vhost: /, messages: 0) 2026-01-30 07:04:14.152161 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_9a9079442a4046d68257bfb9d1e0a6f7 (vhost: /, messages: 0) 
2026-01-30 07:04:14.152172 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_9bb2e7f8a5f54093b7084087d4556e1c (vhost: /, messages: 0)
2026-01-30 07:04:14.152177 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_a1614c60e6724aa2b9c6200b04962ea5 (vhost: /, messages: 0)
2026-01-30 07:04:14.152271 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_cb2d81a559a44c1f8daf5f42e8ebaf0b (vhost: /, messages: 0)
2026-01-30 07:04:14.152458 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_cf9eabb1f5784cce809aa20a5981771e (vhost: /, messages: 0)
2026-01-30 07:04:14.152467 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_dee87606a6da45bcb25e6cc8ab9b09e8 (vhost: /, messages: 0)
2026-01-30 07:04:14.152473 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - reply_eb63769557fa4b0e8497864b6499dbff (vhost: /, messages: 0)
2026-01-30 07:04:14.152478 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-01-30 07:04:14.152510 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-01-30 07:04:14.152799 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-01-30 07:04:14.152855 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-01-30 07:04:14.152862 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler_fanout_370bab412ccc4dcc9235256362ea09f7 (vhost: /, messages: 0)
2026-01-30 07:04:14.152878 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler_fanout_3dfea00941cf41dc9d31ffa3d97faa1c (vhost: /, messages: 0)
2026-01-30 07:04:14.152886 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler_fanout_a67e4d3d5b514c46a5c7873029d937b8 (vhost: /, messages: 0)
2026-01-30 07:04:14.152892 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler_fanout_adbcc30dba924a8da22dc37c56629c2e (vhost: /, messages: 0)
2026-01-30 07:04:14.153056 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler_fanout_b5e1309057374126a0aa6d5ec9a8466d (vhost: /, messages: 0)
2026-01-30 07:04:14.153169 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - scheduler_fanout_c09c1375277c4ecb9fd2fc47fa26c16b (vhost: /, messages: 0)
2026-01-30 07:04:14.153183 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker (vhost: /, messages: 0)
2026-01-30 07:04:14.153229 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-01-30 07:04:14.153238 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-01-30 07:04:14.153243 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-01-30 07:04:14.153391 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker_fanout_2c6c6d1946cc41d182a5d363dfe6561e (vhost: /, messages: 0)
2026-01-30 07:04:14.153401 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker_fanout_45ff6bb8d0964a72a180b5af5618c314 (vhost: /, messages: 0)
2026-01-30 07:04:14.153407 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker_fanout_55c5849f12f343df9c56dee80f6d0abe (vhost: /, messages: 0)
2026-01-30 07:04:14.153566 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker_fanout_63bc6dcd3bdd48658b92c421a58a1915 (vhost: /, messages: 0)
2026-01-30 07:04:14.153575 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker_fanout_92bf3a812e6249e4af5ad8b875573e6a (vhost: /, messages: 0)
2026-01-30 07:04:14.153643 | orchestrator | 2026-01-30 07:04:14 | INFO  |  - worker_fanout_ada91e0e90bb40cdaf73b149b9867c7e (vhost: /, messages: 0)
2026-01-30 07:04:14.481967 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-01-30 07:04:16.490389 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-01-30 07:04:16.490472 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-01-30 07:04:16.490483 | orchestrator |                                   [--vhost VHOST]
2026-01-30 07:04:16.490490 | orchestrator |                                   [{list,delete,prepare,check}]
2026-01-30 07:04:16.490497 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-01-30 07:04:16.490506 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-01-30 07:04:17.205261 | orchestrator | ERROR
2026-01-30 07:04:17.205487 | orchestrator | {
2026-01-30 07:04:17.206092 | orchestrator |   "delta": "2:01:31.275431",
2026-01-30 07:04:17.206150 | orchestrator |   "end": "2026-01-30 07:04:16.814438",
2026-01-30 07:04:17.206179 | orchestrator |   "msg": "non-zero return code",
2026-01-30 07:04:17.206206 | orchestrator |   "rc": 2,
2026-01-30 07:04:17.206226 | orchestrator |   "start": "2026-01-30 05:02:45.539007"
2026-01-30 07:04:17.206245 | orchestrator | } failure
2026-01-30 07:04:17.465930 |
2026-01-30 07:04:17.466108 | PLAY RECAP
2026-01-30 07:04:17.466171 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-01-30 07:04:17.466196 |
2026-01-30 07:04:17.708267 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-01-30 07:04:17.711917 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-30 07:04:18.472737 |
2026-01-30 07:04:18.472902 | PLAY [Post output play]
2026-01-30 07:04:18.488727 |
2026-01-30 07:04:18.488867 | LOOP [stage-output : Register sources]
2026-01-30 07:04:18.560105 |
2026-01-30 07:04:18.560637 | TASK [stage-output : Check sudo]
2026-01-30 07:04:19.419573 | orchestrator | sudo: a password is required
2026-01-30 07:04:19.604295 | orchestrator | ok: Runtime: 0:00:00.013835
2026-01-30 07:04:19.619400 |
2026-01-30 07:04:19.619558 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-30 07:04:19.656327 |
2026-01-30 07:04:19.656615 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-30 07:04:19.723701 | orchestrator | ok
2026-01-30 07:04:19.731629 |
2026-01-30 07:04:19.731755 | LOOP [stage-output : Ensure target folders exist]
2026-01-30 07:04:20.216370 | orchestrator | ok: "docs"
2026-01-30 07:04:20.216734 |
2026-01-30 07:04:20.513402 | orchestrator | ok: "artifacts"
2026-01-30 07:04:20.794597 | orchestrator | ok: "logs"
2026-01-30 07:04:20.815648 |
2026-01-30 07:04:20.815826 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-30 07:04:20.853861 |
2026-01-30 07:04:20.854162 | TASK [stage-output : Make all log files readable]
2026-01-30 07:04:21.155692 | orchestrator | ok
2026-01-30 07:04:21.165040 |
2026-01-30 07:04:21.165223 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-30 07:04:21.198908 | orchestrator | skipping: Conditional result was False
2026-01-30 07:04:21.210425 |
2026-01-30 07:04:21.210560 | TASK [stage-output : Discover log files for compression]
2026-01-30 07:04:21.233975 | orchestrator | skipping: Conditional result was False
2026-01-30 07:04:21.246298 |
2026-01-30 07:04:21.246429 | LOOP [stage-output : Archive everything from logs]
2026-01-30 07:04:21.291565 |
2026-01-30 07:04:21.291742 | PLAY [Post cleanup play]
2026-01-30 07:04:21.300362 |
2026-01-30 07:04:21.300451 | TASK [Set cloud fact (Zuul deployment)]
2026-01-30 07:04:21.357835 | orchestrator | ok
2026-01-30 07:04:21.368746 |
2026-01-30 07:04:21.368843 | TASK [Set cloud fact (local deployment)]
2026-01-30 07:04:21.401929 | orchestrator | skipping: Conditional result was False
2026-01-30 07:04:21.418618 |
2026-01-30 07:04:21.418848 | TASK [Clean the cloud environment]
2026-01-30 07:04:22.061456 | orchestrator | 2026-01-30 07:04:22 - clean up servers
2026-01-30 07:04:22.816114 | orchestrator | 2026-01-30 07:04:22 - testbed-manager
2026-01-30 07:04:22.900962 | orchestrator | 2026-01-30 07:04:22 - testbed-node-5
2026-01-30 07:04:22.993241 | orchestrator | 2026-01-30 07:04:22 - testbed-node-2
2026-01-30 07:04:23.078542 | orchestrator | 2026-01-30 07:04:23 - testbed-node-0
2026-01-30 07:04:23.172654 | orchestrator | 2026-01-30 07:04:23 - testbed-node-4
2026-01-30 07:04:23.265357 | orchestrator | 2026-01-30 07:04:23 - testbed-node-1
2026-01-30 07:04:23.362311 | orchestrator | 2026-01-30 07:04:23 - testbed-node-3
2026-01-30 07:04:23.447743 | orchestrator | 2026-01-30 07:04:23 - clean up keypairs
2026-01-30 07:04:23.463757 | orchestrator | 2026-01-30 07:04:23 - testbed
2026-01-30 07:04:23.486801 | orchestrator | 2026-01-30 07:04:23 - wait for servers to be gone
2026-01-30 07:04:34.371587 | orchestrator | 2026-01-30 07:04:34 - clean up ports
2026-01-30 07:04:34.554621 | orchestrator | 2026-01-30 07:04:34 - 02616041-a237-47a4-9bf2-2bd6b34ae565
2026-01-30 07:04:34.855922 | orchestrator | 2026-01-30 07:04:34 - 4cbde05d-6e33-43b8-9dee-307e12fdd1d6
2026-01-30 07:04:35.185102 | orchestrator | 2026-01-30 07:04:35 - 7259f922-5c56-421a-8610-b719e2e20dcf
2026-01-30 07:04:35.586569 | orchestrator | 2026-01-30 07:04:35 - 76facdec-799e-4c90-9158-f1be3dfe94ae
2026-01-30 07:04:35.806231 | orchestrator | 2026-01-30 07:04:35 - 85d3f859-aa7b-4f78-b798-e492b6c24ec2
2026-01-30 07:04:36.013547 | orchestrator | 2026-01-30 07:04:36 - 942c0f1d-6b94-4290-9d42-e962339048e1
2026-01-30 07:04:36.221149 | orchestrator | 2026-01-30 07:04:36 - ee55e152-7e5e-40c4-9384-a2ee2e42f4bf
2026-01-30 07:04:36.481917 | orchestrator | 2026-01-30 07:04:36 - clean up volumes
2026-01-30 07:04:36.619021 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-manager-base
2026-01-30 07:04:36.659045 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-5-node-base
2026-01-30 07:04:36.699919 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-0-node-base
2026-01-30 07:04:36.750630 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-1-node-base
2026-01-30 07:04:36.795821 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-3-node-base
2026-01-30 07:04:36.839532 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-4-node-base
2026-01-30 07:04:36.885965 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-2-node-base
2026-01-30 07:04:36.927224 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-3-node-3
2026-01-30 07:04:36.972521 | orchestrator | 2026-01-30 07:04:36 - testbed-volume-4-node-4
2026-01-30 07:04:37.016974 | orchestrator | 2026-01-30 07:04:37 - testbed-volume-0-node-3
2026-01-30 07:04:37.059334 | orchestrator | 2026-01-30 07:04:37 - testbed-volume-6-node-3
2026-01-30 07:04:37.105957 | orchestrator | 2026-01-30 07:04:37 - testbed-volume-2-node-5
2026-01-30 07:04:37.155040 | orchestrator | 2026-01-30 07:04:37 - testbed-volume-7-node-4
2026-01-30 07:04:37.211091 | orchestrator | 2026-01-30 07:04:37 - testbed-volume-1-node-4
2026-01-30 07:04:37.264254 | orchestrator | 2026-01-30 07:04:37 - testbed-volume-5-node-5
2026-01-30 07:04:37.305231 | orchestrator | 2026-01-30 07:04:37 - testbed-volume-8-node-5
2026-01-30 07:04:37.343267 | orchestrator | 2026-01-30 07:04:37 - disconnect routers
2026-01-30 07:04:37.411938 | orchestrator | 2026-01-30 07:04:37 - testbed
2026-01-30 07:04:38.970433 | orchestrator | 2026-01-30 07:04:38 - clean up subnets
2026-01-30 07:04:39.047818 | orchestrator | 2026-01-30 07:04:39 - subnet-testbed-management
2026-01-30 07:04:39.221738 | orchestrator | 2026-01-30 07:04:39 - clean up networks
2026-01-30 07:04:39.399715 | orchestrator | 2026-01-30 07:04:39 - net-testbed-management
2026-01-30 07:04:39.724640 | orchestrator | 2026-01-30 07:04:39 - clean up security groups
2026-01-30 07:04:39.782836 | orchestrator | 2026-01-30 07:04:39 - testbed-node
2026-01-30 07:04:39.952139 | orchestrator | 2026-01-30 07:04:39 - testbed-management
2026-01-30 07:04:40.070125 | orchestrator | 2026-01-30 07:04:40 - clean up floating ips
2026-01-30 07:04:40.106152 | orchestrator | 2026-01-30 07:04:40 - 81.163.193.182
2026-01-30 07:04:40.500712 | orchestrator | 2026-01-30 07:04:40 - clean up routers
2026-01-30 07:04:40.601146 | orchestrator | 2026-01-30 07:04:40 - testbed
2026-01-30 07:04:41.966520 | orchestrator | ok: Runtime: 0:00:19.771157
2026-01-30 07:04:41.971333 |
2026-01-30 07:04:41.971499 | PLAY RECAP
2026-01-30 07:04:41.971622 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-30 07:04:41.971686 |
2026-01-30 07:04:42.112120 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-30 07:04:42.114663 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-30 07:04:42.838606 |
2026-01-30 07:04:42.838765 | PLAY [Cleanup play]
2026-01-30 07:04:42.854906 |
2026-01-30 07:04:42.855061 | TASK [Set cloud fact (Zuul deployment)]
2026-01-30 07:04:42.922294 | orchestrator | ok
2026-01-30 07:04:42.931868 |
2026-01-30 07:04:42.932022 | TASK [Set cloud fact (local deployment)]
2026-01-30 07:04:42.967098 | orchestrator | skipping: Conditional result was False
2026-01-30 07:04:42.982540 |
2026-01-30 07:04:42.982675 | TASK [Clean the cloud environment]
2026-01-30 07:04:44.178803 | orchestrator | 2026-01-30 07:04:44 - clean up servers
2026-01-30 07:04:44.649108 | orchestrator | 2026-01-30 07:04:44 - clean up keypairs
2026-01-30 07:04:44.667804 | orchestrator | 2026-01-30 07:04:44 - wait for servers to be gone
2026-01-30 07:04:44.715716 | orchestrator | 2026-01-30 07:04:44 - clean up ports
2026-01-30 07:04:44.802528 | orchestrator | 2026-01-30 07:04:44 - clean up volumes
2026-01-30 07:04:44.865951 | orchestrator | 2026-01-30 07:04:44 - disconnect routers
2026-01-30 07:04:44.895618 | orchestrator | 2026-01-30 07:04:44 - clean up subnets
2026-01-30 07:04:44.915737 | orchestrator | 2026-01-30 07:04:44 - clean up networks
2026-01-30 07:04:45.112497 | orchestrator | 2026-01-30 07:04:45 - clean up security groups
2026-01-30 07:04:45.149710 | orchestrator | 2026-01-30 07:04:45 - clean up floating ips
2026-01-30 07:04:45.179098 | orchestrator | 2026-01-30 07:04:45 - clean up routers
2026-01-30 07:04:45.519659 | orchestrator | ok: Runtime: 0:00:01.434423
2026-01-30 07:04:45.523429 |
2026-01-30 07:04:45.523598 | PLAY RECAP
2026-01-30 07:04:45.523732 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-30 07:04:45.523802 |
2026-01-30 07:04:45.648405 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-30 07:04:45.650871 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-30 07:04:46.428250 |
2026-01-30 07:04:46.428420 | PLAY [Base post-fetch]
2026-01-30 07:04:46.444480 |
2026-01-30 07:04:46.444615 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-30 07:04:46.500827 | orchestrator | skipping: Conditional result was False
2026-01-30 07:04:46.515957 |
2026-01-30 07:04:46.516180 | TASK [fetch-output : Set log path for single node]
2026-01-30 07:04:46.555916 | orchestrator | ok
2026-01-30 07:04:46.564267 |
2026-01-30 07:04:46.564396 | LOOP [fetch-output : Ensure local output dirs]
2026-01-30 07:04:47.047899 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/work/logs"
2026-01-30 07:04:47.311514 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/work/artifacts"
2026-01-30 07:04:47.585656 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/1d7e04a1686140da853285cbef7032ad/work/docs"
2026-01-30 07:04:47.609393 |
2026-01-30 07:04:47.609593 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-30 07:04:48.569652 | orchestrator | changed: .d..t...... ./
2026-01-30 07:04:48.569993 | orchestrator | changed: All items complete
2026-01-30 07:04:48.570069 |
2026-01-30 07:04:49.311113 | orchestrator | changed: .d..t...... ./
2026-01-30 07:04:50.034022 | orchestrator | changed: .d..t...... ./
2026-01-30 07:04:50.060645 |
2026-01-30 07:04:50.060811 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-30 07:04:50.096942 | orchestrator | skipping: Conditional result was False
2026-01-30 07:04:50.101068 | orchestrator | skipping: Conditional result was False
2026-01-30 07:04:50.119596 |
2026-01-30 07:04:50.119694 | PLAY RECAP
2026-01-30 07:04:50.119768 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-30 07:04:50.119806 |
2026-01-30 07:04:50.277383 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-30 07:04:50.279825 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-30 07:04:50.998529 |
2026-01-30 07:04:50.998689 | PLAY [Base post]
2026-01-30 07:04:51.013506 |
2026-01-30 07:04:51.013657 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-30 07:04:52.000863 | orchestrator | changed
2026-01-30 07:04:52.012015 |
2026-01-30 07:04:52.012197 | PLAY RECAP
2026-01-30 07:04:52.012274 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-30 07:04:52.012352 |
2026-01-30 07:04:52.139281 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-30 07:04:52.141653 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-30 07:04:52.967184 |
2026-01-30 07:04:52.967364 | PLAY [Base post-logs]
2026-01-30 07:04:52.978261 |
2026-01-30 07:04:52.978404 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-30 07:04:53.444319 | localhost | changed
2026-01-30 07:04:53.460337 |
2026-01-30 07:04:53.460509 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-30 07:04:53.499490 | localhost | ok
2026-01-30 07:04:53.506136 |
2026-01-30 07:04:53.506302 | TASK [Set zuul-log-path fact]
2026-01-30 07:04:53.524067 | localhost | ok
2026-01-30 07:04:53.536377 |
2026-01-30 07:04:53.536517 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-30 07:04:53.573546 | localhost | ok
2026-01-30 07:04:53.579346 |
2026-01-30 07:04:53.579498 | TASK [upload-logs : Create log directories]
2026-01-30 07:04:54.109512 | localhost | changed
2026-01-30 07:04:54.114996 |
2026-01-30 07:04:54.115197 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-30 07:04:54.608063 | localhost -> localhost | ok: Runtime: 0:00:00.008422
2026-01-30 07:04:54.617346 |
2026-01-30 07:04:54.617525 | TASK [upload-logs : Upload logs to log server]
2026-01-30 07:04:55.197913 | localhost | Output suppressed because no_log was given
2026-01-30 07:04:55.202150 |
2026-01-30 07:04:55.202363 | LOOP [upload-logs : Compress console log and json output]
2026-01-30 07:04:55.266616 | localhost | skipping: Conditional result was False
2026-01-30 07:04:55.272312 | localhost | skipping: Conditional result was False
2026-01-30 07:04:55.282335 |
2026-01-30 07:04:55.282485 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-30 07:04:55.332249 | localhost | skipping: Conditional result was False
2026-01-30 07:04:55.332536 |
2026-01-30 07:04:55.338108 | localhost | skipping: Conditional result was False
2026-01-30 07:04:55.350628 |
2026-01-30 07:04:55.350775 | LOOP [upload-logs : Upload console log and json output]